00:00:00.001 Started by upstream project "autotest-per-patch" build number 132410
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.044 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.045 The recommended git tool is: git
00:00:00.045 using credential 00000000-0000-0000-0000-000000000002
00:00:00.047 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.072 Fetching changes from the remote Git repository
00:00:00.077 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.113 Using shallow fetch with depth 1
00:00:00.113 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.113 > git --version # timeout=10
00:00:00.183 > git --version # 'git version 2.39.2'
00:00:00.183 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.238 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.238 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.476 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.490 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.505 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.505 > git config core.sparsecheckout # timeout=10
00:00:04.518 > git read-tree -mu HEAD # timeout=10
00:00:04.540 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.565 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.566 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.663 [Pipeline] Start of Pipeline
00:00:04.680 [Pipeline] library
00:00:04.682 Loading library shm_lib@master
00:00:04.682 Library shm_lib@master is cached. Copying from home.
00:00:04.701 [Pipeline] node
00:00:04.707 Running on VM-host-WFP1 in /var/jenkins/workspace/raid-vg-autotest_2
00:00:04.710 [Pipeline] {
00:00:04.721 [Pipeline] catchError
00:00:04.723 [Pipeline] {
00:00:04.737 [Pipeline] wrap
00:00:04.747 [Pipeline] {
00:00:04.756 [Pipeline] stage
00:00:04.758 [Pipeline] { (Prologue)
00:00:04.779 [Pipeline] echo
00:00:04.781 Node: VM-host-WFP1
00:00:04.787 [Pipeline] cleanWs
00:00:04.798 [WS-CLEANUP] Deleting project workspace...
00:00:04.798 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.804 [WS-CLEANUP] done
00:00:05.042 [Pipeline] setCustomBuildProperty
00:00:05.131 [Pipeline] httpRequest
00:00:05.534 [Pipeline] echo
00:00:05.536 Sorcerer 10.211.164.20 is alive
00:00:05.545 [Pipeline] retry
00:00:05.548 [Pipeline] {
00:00:05.563 [Pipeline] httpRequest
00:00:05.568 HttpMethod: GET
00:00:05.568 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.568 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.569 Response Code: HTTP/1.1 200 OK
00:00:05.570 Success: Status code 200 is in the accepted range: 200,404
00:00:05.570 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.742 [Pipeline] }
00:00:05.761 [Pipeline] // retry
00:00:05.770 [Pipeline] sh
00:00:06.094 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.107 [Pipeline] httpRequest
00:00:06.419 [Pipeline] echo
00:00:06.421 Sorcerer 10.211.164.20 is alive
00:00:06.431 [Pipeline] retry
00:00:06.433 [Pipeline] {
00:00:06.445 [Pipeline] httpRequest
00:00:06.449 HttpMethod: GET
00:00:06.449 URL: http://10.211.164.20/packages/spdk_1981e6eeca201ec19c7d70102797d8ef3cab85cb.tar.gz
00:00:06.450 Sending request to url: http://10.211.164.20/packages/spdk_1981e6eeca201ec19c7d70102797d8ef3cab85cb.tar.gz
00:00:06.451 Response Code: HTTP/1.1 200 OK
00:00:06.451 Success: Status code 200 is in the accepted range: 200,404
00:00:06.452 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/spdk_1981e6eeca201ec19c7d70102797d8ef3cab85cb.tar.gz
00:00:20.188 [Pipeline] }
00:00:20.207 [Pipeline] // retry
00:00:20.214 [Pipeline] sh
00:00:20.493 + tar --no-same-owner -xf spdk_1981e6eeca201ec19c7d70102797d8ef3cab85cb.tar.gz
00:00:23.042 [Pipeline] sh
00:00:23.326 + git -C spdk log --oneline -n5
00:00:23.326 1981e6eec bdevperf: Add hide_metadata option
00:00:23.326 66a383faf bdevperf: Get metadata config by not bdev but bdev_desc
00:00:23.326 25916e30c bdevperf: Store the result of DIF type check into job structure
00:00:23.326 bd9804982 bdevperf: g_main_thread calls bdev_open() instead of job->thread
00:00:23.326 2e015e34f bdevperf: Remove TAILQ_REMOVE which may result in potential memory leak
00:00:23.342 [Pipeline] writeFile
00:00:23.354 [Pipeline] sh
00:00:23.664 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:23.676 [Pipeline] sh
00:00:23.961 + cat autorun-spdk.conf
00:00:23.961 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:23.961 SPDK_RUN_ASAN=1
00:00:23.961 SPDK_RUN_UBSAN=1
00:00:23.961 SPDK_TEST_RAID=1
00:00:23.961 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:23.968 RUN_NIGHTLY=0
00:00:23.970 [Pipeline] }
00:00:23.985 [Pipeline] // stage
00:00:24.005 [Pipeline] stage
00:00:24.006 [Pipeline] { (Run VM)
00:00:24.019 [Pipeline] sh
00:00:24.302 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:24.302 + echo 'Start stage prepare_nvme.sh'
00:00:24.302 Start stage prepare_nvme.sh
00:00:24.302 + [[ -n 2 ]]
00:00:24.302 + disk_prefix=ex2
00:00:24.302 + [[ -n /var/jenkins/workspace/raid-vg-autotest_2 ]]
00:00:24.302 + [[ -e /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf ]]
00:00:24.302 + source /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf
00:00:24.302 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:24.302 ++ SPDK_RUN_ASAN=1
00:00:24.302 ++ SPDK_RUN_UBSAN=1
00:00:24.302 ++ SPDK_TEST_RAID=1
00:00:24.302 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:24.302 ++ RUN_NIGHTLY=0
00:00:24.302 + cd /var/jenkins/workspace/raid-vg-autotest_2
00:00:24.302 + nvme_files=()
00:00:24.302 + declare -A nvme_files
00:00:24.302 + backend_dir=/var/lib/libvirt/images/backends
00:00:24.302 + nvme_files['nvme.img']=5G
00:00:24.302 + nvme_files['nvme-cmb.img']=5G
00:00:24.302 + nvme_files['nvme-multi0.img']=4G
00:00:24.302 + nvme_files['nvme-multi1.img']=4G
00:00:24.302 + nvme_files['nvme-multi2.img']=4G
00:00:24.302 + nvme_files['nvme-openstack.img']=8G
00:00:24.302 + nvme_files['nvme-zns.img']=5G
00:00:24.302 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:24.302 + (( SPDK_TEST_FTL == 1 ))
00:00:24.302 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:24.302 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:24.302 + for nvme in "${!nvme_files[@]}"
00:00:24.302 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:00:24.302 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:24.302 + for nvme in "${!nvme_files[@]}"
00:00:24.302 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:00:24.302 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:24.302 + for nvme in "${!nvme_files[@]}"
00:00:24.302 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:00:24.302 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:24.302 + for nvme in "${!nvme_files[@]}"
00:00:24.302 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:00:24.870 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:24.870 + for nvme in "${!nvme_files[@]}"
00:00:24.870 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:00:24.870 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:24.870 + for nvme in "${!nvme_files[@]}"
00:00:24.870 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:00:24.870 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:24.870 + for nvme in "${!nvme_files[@]}"
00:00:24.870 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:00:25.438 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:25.438 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:00:25.438 + echo 'End stage prepare_nvme.sh'
00:00:25.438 End stage prepare_nvme.sh
00:00:25.449 [Pipeline] sh
00:00:25.733 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:25.733 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39
00:00:25.733
00:00:25.733 DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant
00:00:25.733 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk
00:00:25.733 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_2
00:00:25.733 HELP=0
00:00:25.733 DRY_RUN=0
00:00:25.733 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,
00:00:25.733 NVME_DISKS_TYPE=nvme,nvme,
00:00:25.733 NVME_AUTO_CREATE=0
00:00:25.733 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,
00:00:25.733 NVME_CMB=,,
00:00:25.733 NVME_PMR=,,
00:00:25.733 NVME_ZNS=,,
00:00:25.733 NVME_MS=,,
00:00:25.733 NVME_FDP=,,
00:00:25.733 SPDK_VAGRANT_DISTRO=fedora39
00:00:25.733 SPDK_VAGRANT_VMCPU=10
00:00:25.733 SPDK_VAGRANT_VMRAM=12288
00:00:25.733 SPDK_VAGRANT_PROVIDER=libvirt
00:00:25.733 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:25.733 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:25.733 SPDK_OPENSTACK_NETWORK=0
00:00:25.733 VAGRANT_PACKAGE_BOX=0
00:00:25.733 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:00:25.733 FORCE_DISTRO=true
00:00:25.733 VAGRANT_BOX_VERSION=
00:00:25.733 EXTRA_VAGRANTFILES=
00:00:25.733 NIC_MODEL=e1000
00:00:25.733
00:00:25.733 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt'
00:00:25.733 /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_2
00:00:29.024 Bringing machine 'default' up with 'libvirt' provider...
00:00:29.593 ==> default: Creating image (snapshot of base box volume).
00:00:29.852 ==> default: Creating domain with the following settings...
00:00:29.852 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732115295_53e56dc6264c61ba037a
00:00:29.852 ==> default: -- Domain type: kvm
00:00:29.852 ==> default: -- Cpus: 10
00:00:29.853 ==> default: -- Feature: acpi
00:00:29.853 ==> default: -- Feature: apic
00:00:29.853 ==> default: -- Feature: pae
00:00:29.853 ==> default: -- Memory: 12288M
00:00:29.853 ==> default: -- Memory Backing: hugepages:
00:00:29.853 ==> default: -- Management MAC:
00:00:29.853 ==> default: -- Loader:
00:00:29.853 ==> default: -- Nvram:
00:00:29.853 ==> default: -- Base box: spdk/fedora39
00:00:29.853 ==> default: -- Storage pool: default
00:00:29.853 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732115295_53e56dc6264c61ba037a.img (20G)
00:00:29.853 ==> default: -- Volume Cache: default
00:00:29.853 ==> default: -- Kernel:
00:00:29.853 ==> default: -- Initrd:
00:00:29.853 ==> default: -- Graphics Type: vnc
00:00:29.853 ==> default: -- Graphics Port: -1
00:00:29.853 ==> default: -- Graphics IP: 127.0.0.1
00:00:29.853 ==> default: -- Graphics Password: Not defined
00:00:29.853 ==> default: -- Video Type: cirrus
00:00:29.853 ==> default: -- Video VRAM: 9216
00:00:29.853 ==> default: -- Sound Type:
00:00:29.853 ==> default: -- Keymap: en-us
00:00:29.853 ==> default: -- TPM Path:
00:00:29.853 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:29.853 ==> default: -- Command line args:
00:00:29.853 ==> default: -> value=-device,
00:00:29.853 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:29.853 ==> default: -> value=-drive,
00:00:29.853 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0,
00:00:29.853 ==> default: -> value=-device,
00:00:29.853 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:29.853 ==> default: -> value=-device,
00:00:29.853 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:29.853 ==> default: -> value=-drive,
00:00:29.853 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:29.853 ==> default: -> value=-device,
00:00:29.853 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:29.853 ==> default: -> value=-drive,
00:00:29.853 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:29.853 ==> default: -> value=-device,
00:00:29.853 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:29.853 ==> default: -> value=-drive,
00:00:29.853 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:29.853 ==> default: -> value=-device,
00:00:29.853 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:30.420 ==> default: Creating shared folders metadata...
00:00:30.420 ==> default: Starting domain.
00:00:31.797 ==> default: Waiting for domain to get an IP address...
00:00:49.889 ==> default: Waiting for SSH to become available...
00:00:49.889 ==> default: Configuring and enabling network interfaces...
00:00:54.084 default: SSH address: 192.168.121.133:22
00:00:54.084 default: SSH username: vagrant
00:00:54.084 default: SSH auth method: private key
00:00:56.627 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:06.606 ==> default: Mounting SSHFS shared folder...
00:01:07.544 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:07.544 ==> default: Checking Mount..
00:01:08.958 ==> default: Folder Successfully Mounted!
00:01:08.958 ==> default: Running provisioner: file...
00:01:09.893 default: ~/.gitconfig => .gitconfig
00:01:10.461
00:01:10.461 SUCCESS!
00:01:10.461
00:01:10.461 cd to /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:01:10.461 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:10.461 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:01:10.461
00:01:10.470 [Pipeline] }
00:01:10.485 [Pipeline] // stage
00:01:10.494 [Pipeline] dir
00:01:10.495 Running in /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt
00:01:10.496 [Pipeline] {
00:01:10.509 [Pipeline] catchError
00:01:10.510 [Pipeline] {
00:01:10.522 [Pipeline] sh
00:01:10.826 + vagrant ssh-config --host vagrant
00:01:10.826 + sed -ne /^Host/,$p
00:01:10.826 + tee ssh_conf
00:01:14.112 Host vagrant
00:01:14.112 HostName 192.168.121.133
00:01:14.112 User vagrant
00:01:14.112 Port 22
00:01:14.112 UserKnownHostsFile /dev/null
00:01:14.112 StrictHostKeyChecking no
00:01:14.112 PasswordAuthentication no
00:01:14.112 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:14.112 IdentitiesOnly yes
00:01:14.112 LogLevel FATAL
00:01:14.112 ForwardAgent yes
00:01:14.112 ForwardX11 yes
00:01:14.112
00:01:14.127 [Pipeline] withEnv
00:01:14.129 [Pipeline] {
00:01:14.142 [Pipeline] sh
00:01:14.423 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:14.423 source /etc/os-release
00:01:14.423 [[ -e /image.version ]] && img=$(< /image.version)
00:01:14.423 # Minimal, systemd-like check.
00:01:14.423 if [[ -e /.dockerenv ]]; then
00:01:14.423 # Clear garbage from the node's name:
00:01:14.423 # agt-er_autotest_547-896 -> autotest_547-896
00:01:14.423 # $HOSTNAME is the actual container id
00:01:14.423 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:14.423 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:14.423 # We can assume this is a mount from a host where container is running,
00:01:14.423 # so fetch its hostname to easily identify the target swarm worker.
00:01:14.423 container="$(< /etc/hostname) ($agent)"
00:01:14.423 else
00:01:14.423 # Fallback
00:01:14.423 container=$agent
00:01:14.423 fi
00:01:14.423 fi
00:01:14.423 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:14.423
00:01:14.726 [Pipeline] }
00:01:14.739 [Pipeline] // withEnv
00:01:14.747 [Pipeline] setCustomBuildProperty
00:01:14.757 [Pipeline] stage
00:01:14.759 [Pipeline] { (Tests)
00:01:14.773 [Pipeline] sh
00:01:15.051 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:15.322 [Pipeline] sh
00:01:15.604 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:15.878 [Pipeline] timeout
00:01:15.879 Timeout set to expire in 1 hr 30 min
00:01:15.881 [Pipeline] {
00:01:15.896 [Pipeline] sh
00:01:16.177 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:16.746 HEAD is now at 1981e6eec bdevperf: Add hide_metadata option
00:01:16.758 [Pipeline] sh
00:01:17.039 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:17.311 [Pipeline] sh
00:01:17.591 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:17.912 [Pipeline] sh
00:01:18.194 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:01:18.453 ++ readlink -f spdk_repo
00:01:18.453 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:18.453 + [[ -n /home/vagrant/spdk_repo ]]
00:01:18.453 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:18.453 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:18.453 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:18.453 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:18.453 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:18.453 + [[ raid-vg-autotest == pkgdep-* ]]
00:01:18.453 + cd /home/vagrant/spdk_repo
00:01:18.453 + source /etc/os-release
00:01:18.454 ++ NAME='Fedora Linux'
00:01:18.454 ++ VERSION='39 (Cloud Edition)'
00:01:18.454 ++ ID=fedora
00:01:18.454 ++ VERSION_ID=39
00:01:18.454 ++ VERSION_CODENAME=
00:01:18.454 ++ PLATFORM_ID=platform:f39
00:01:18.454 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:18.454 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:18.454 ++ LOGO=fedora-logo-icon
00:01:18.454 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:18.454 ++ HOME_URL=https://fedoraproject.org/
00:01:18.454 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:18.454 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:18.454 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:18.454 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:18.454 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:18.454 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:18.454 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:18.454 ++ SUPPORT_END=2024-11-12
00:01:18.454 ++ VARIANT='Cloud Edition'
00:01:18.454 ++ VARIANT_ID=cloud
00:01:18.454 + uname -a
00:01:18.454 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:18.454 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:19.021 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:19.021 Hugepages
00:01:19.021 node hugesize free / total
00:01:19.021 node0 1048576kB 0 / 0
00:01:19.021 node0 2048kB 0 / 0
00:01:19.021
00:01:19.021 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:19.021 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:19.021 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:19.021 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:01:19.021 + rm -f /tmp/spdk-ld-path
00:01:19.021 + source autorun-spdk.conf
00:01:19.021 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:19.021 ++ SPDK_RUN_ASAN=1
00:01:19.021 ++ SPDK_RUN_UBSAN=1
00:01:19.021 ++ SPDK_TEST_RAID=1
00:01:19.021 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:19.021 ++ RUN_NIGHTLY=0
00:01:19.021 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:19.021 + [[ -n '' ]]
00:01:19.021 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:19.021 + for M in /var/spdk/build-*-manifest.txt
00:01:19.021 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:19.021 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:19.021 + for M in /var/spdk/build-*-manifest.txt
00:01:19.021 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:19.021 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:19.281 + for M in /var/spdk/build-*-manifest.txt
00:01:19.281 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:19.281 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:19.281 ++ uname
00:01:19.281 + [[ Linux == \L\i\n\u\x ]]
00:01:19.281 + sudo dmesg -T
00:01:19.281 + sudo dmesg --clear
00:01:19.281 + dmesg_pid=5208
00:01:19.281 + sudo dmesg -Tw
00:01:19.281 + [[ Fedora Linux == FreeBSD ]]
00:01:19.281 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:19.281 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:19.281 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:19.281 + [[ -x /usr/src/fio-static/fio ]]
00:01:19.281 + export FIO_BIN=/usr/src/fio-static/fio
00:01:19.281 + FIO_BIN=/usr/src/fio-static/fio
00:01:19.281 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:19.281 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:19.281 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:19.281 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:19.281 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:19.281 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:19.281 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:19.281 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:19.281 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:19.281 15:09:05 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:19.281 15:09:05 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:19.281 15:09:05 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:19.281 15:09:05 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:01:19.281 15:09:05 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:01:19.281 15:09:05 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:01:19.281 15:09:05 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:19.281 15:09:05 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:01:19.281 15:09:05 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:19.281 15:09:05 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:19.540 15:09:05 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:19.540 15:09:05 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:19.540 15:09:05 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:19.540 15:09:05 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:19.540 15:09:05 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:19.540 15:09:05 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:19.540 15:09:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:19.541 15:09:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:19.541 15:09:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:19.541 15:09:05 -- paths/export.sh@5 -- $ export PATH
00:01:19.541 15:09:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:19.541 15:09:05 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:19.541 15:09:05 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:19.541 15:09:05 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732115345.XXXXXX
00:01:19.541 15:09:05 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732115345.n2V6bZ
00:01:19.541 15:09:05 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:19.541 15:09:05 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:19.541 15:09:05 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:19.541 15:09:05 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:19.541 15:09:05 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:19.541 15:09:05 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:19.541 15:09:05 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:19.541 15:09:05 -- common/autotest_common.sh@10 -- $ set +x
00:01:19.541 15:09:05 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:01:19.541 15:09:05 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:19.541 15:09:05 -- pm/common@17 -- $ local monitor
00:01:19.541 15:09:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:19.541 15:09:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:19.541 15:09:05 -- pm/common@25 -- $ sleep 1
00:01:19.541 15:09:05 -- pm/common@21 -- $ date +%s
00:01:19.541 15:09:05 -- pm/common@21 -- $ date +%s
00:01:19.541 15:09:05 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732115345
00:01:19.541 15:09:05 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732115345
00:01:19.541 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732115345_collect-cpu-load.pm.log
00:01:19.541 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732115345_collect-vmstat.pm.log
00:01:20.477 15:09:06 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:20.477 15:09:06 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:20.477 15:09:06 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:20.477 15:09:06 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:20.477 15:09:06 -- spdk/autobuild.sh@16 -- $ date -u
00:01:20.477 Wed Nov 20 03:09:06 PM UTC 2024
00:01:20.477 15:09:06 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:20.477 v25.01-pre-239-g1981e6eec
00:01:20.477 15:09:06 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:20.477 15:09:06 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:20.477 15:09:06 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:20.477 15:09:06 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:20.477 15:09:06 -- common/autotest_common.sh@10 -- $ set +x
00:01:20.477 ************************************
00:01:20.477 START TEST asan
00:01:20.477 ************************************
00:01:20.477 using asan
00:01:20.477 15:09:06 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:20.477
00:01:20.477 real	0m0.000s
00:01:20.477 user	0m0.000s
00:01:20.477 sys	0m0.000s
00:01:20.477 15:09:06 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:20.477 15:09:06 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:20.477 ************************************
00:01:20.477 END TEST asan
00:01:20.477 ************************************
00:01:20.737 15:09:07 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:20.737 15:09:07 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:20.737 15:09:07 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:20.737 15:09:07 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:20.737 15:09:07 -- common/autotest_common.sh@10 -- $ set +x
00:01:20.737 ************************************
00:01:20.737 START TEST ubsan
00:01:20.737 ************************************
00:01:20.738 using ubsan
00:01:20.738 15:09:07 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:20.738
00:01:20.739 real	0m0.000s
00:01:20.739 user	0m0.000s
00:01:20.739 sys	0m0.000s
00:01:20.739 15:09:07 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:20.739 ************************************
00:01:20.739 END TEST ubsan
00:01:20.739 15:09:07 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:20.739 ************************************
00:01:20.739 15:09:07 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:20.739 15:09:07 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:20.739 15:09:07 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:20.739 15:09:07 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:20.739 15:09:07 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:20.739 15:09:07 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:20.739 15:09:07 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:20.739 15:09:07 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:20.740 15:09:07 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:01:21.004 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:21.004 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:21.571 Using 'verbs' RDMA provider
00:01:37.397 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:01:55.501 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:01:55.501 Creating mk/config.mk...done.
00:01:55.501 Creating mk/cc.flags.mk...done.
00:01:55.501 Type 'make' to build.
00:01:55.501 15:09:40 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:01:55.501 15:09:40 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:55.501 15:09:40 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:55.501 15:09:40 -- common/autotest_common.sh@10 -- $ set +x
00:01:55.501 ************************************
00:01:55.501 START TEST make
00:01:55.501 ************************************
00:01:55.501 15:09:40 make -- common/autotest_common.sh@1129 -- $ make -j10
00:01:55.501 make[1]: Nothing to be done for 'all'.
00:02:05.607 The Meson build system
00:02:05.607 Version: 1.5.0
00:02:05.607 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:05.607 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:05.607 Build type: native build
00:02:05.607 Program cat found: YES (/usr/bin/cat)
00:02:05.607 Project name: DPDK
00:02:05.607 Project version: 24.03.0
00:02:05.607 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:05.607 C linker for the host machine: cc ld.bfd 2.40-14
00:02:05.607 Host machine cpu family: x86_64
00:02:05.607 Host machine cpu: x86_64
00:02:05.607 Message: ## Building in Developer Mode ##
00:02:05.607 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:05.607 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:05.607 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:05.607 Program python3 found: YES (/usr/bin/python3)
00:02:05.607 Program cat found: YES (/usr/bin/cat)
00:02:05.607 Compiler for C supports arguments -march=native: YES
00:02:05.607 Checking for size of "void *" : 8
00:02:05.607 Checking for size of "void *" : 8 (cached)
00:02:05.607 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:05.607 Library m found: YES
00:02:05.607 Library numa found: YES
00:02:05.607 Has header "numaif.h" : YES
00:02:05.607 Library fdt found: NO
00:02:05.607 Library execinfo found: NO
00:02:05.607 Has header "execinfo.h" : YES
00:02:05.607 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:05.607 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:05.607 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:05.607 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:05.607 Run-time dependency openssl found: YES 3.1.1
00:02:05.607 Run-time dependency libpcap found: YES 1.10.4
00:02:05.607 Has header "pcap.h" with dependency
libpcap: YES 00:02:05.608 Compiler for C supports arguments -Wcast-qual: YES 00:02:05.608 Compiler for C supports arguments -Wdeprecated: YES 00:02:05.608 Compiler for C supports arguments -Wformat: YES 00:02:05.608 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:05.608 Compiler for C supports arguments -Wformat-security: NO 00:02:05.608 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:05.608 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:05.608 Compiler for C supports arguments -Wnested-externs: YES 00:02:05.608 Compiler for C supports arguments -Wold-style-definition: YES 00:02:05.608 Compiler for C supports arguments -Wpointer-arith: YES 00:02:05.608 Compiler for C supports arguments -Wsign-compare: YES 00:02:05.608 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:05.608 Compiler for C supports arguments -Wundef: YES 00:02:05.608 Compiler for C supports arguments -Wwrite-strings: YES 00:02:05.608 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:05.608 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:05.608 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:05.608 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:05.608 Program objdump found: YES (/usr/bin/objdump) 00:02:05.608 Compiler for C supports arguments -mavx512f: YES 00:02:05.608 Checking if "AVX512 checking" compiles: YES 00:02:05.608 Fetching value of define "__SSE4_2__" : 1 00:02:05.608 Fetching value of define "__AES__" : 1 00:02:05.608 Fetching value of define "__AVX__" : 1 00:02:05.608 Fetching value of define "__AVX2__" : 1 00:02:05.608 Fetching value of define "__AVX512BW__" : 1 00:02:05.608 Fetching value of define "__AVX512CD__" : 1 00:02:05.608 Fetching value of define "__AVX512DQ__" : 1 00:02:05.608 Fetching value of define "__AVX512F__" : 1 00:02:05.608 Fetching value of define "__AVX512VL__" : 1 00:02:05.608 Fetching value of define 
"__PCLMUL__" : 1 00:02:05.608 Fetching value of define "__RDRND__" : 1 00:02:05.608 Fetching value of define "__RDSEED__" : 1 00:02:05.608 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:05.608 Fetching value of define "__znver1__" : (undefined) 00:02:05.608 Fetching value of define "__znver2__" : (undefined) 00:02:05.608 Fetching value of define "__znver3__" : (undefined) 00:02:05.608 Fetching value of define "__znver4__" : (undefined) 00:02:05.608 Library asan found: YES 00:02:05.608 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:05.608 Message: lib/log: Defining dependency "log" 00:02:05.608 Message: lib/kvargs: Defining dependency "kvargs" 00:02:05.608 Message: lib/telemetry: Defining dependency "telemetry" 00:02:05.608 Library rt found: YES 00:02:05.608 Checking for function "getentropy" : NO 00:02:05.608 Message: lib/eal: Defining dependency "eal" 00:02:05.608 Message: lib/ring: Defining dependency "ring" 00:02:05.608 Message: lib/rcu: Defining dependency "rcu" 00:02:05.608 Message: lib/mempool: Defining dependency "mempool" 00:02:05.608 Message: lib/mbuf: Defining dependency "mbuf" 00:02:05.608 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:05.608 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:05.608 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:05.608 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:05.608 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:05.608 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:05.608 Compiler for C supports arguments -mpclmul: YES 00:02:05.608 Compiler for C supports arguments -maes: YES 00:02:05.608 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:05.608 Compiler for C supports arguments -mavx512bw: YES 00:02:05.608 Compiler for C supports arguments -mavx512dq: YES 00:02:05.608 Compiler for C supports arguments -mavx512vl: YES 00:02:05.608 Compiler for C supports arguments -mvpclmulqdq: YES 
00:02:05.608 Compiler for C supports arguments -mavx2: YES 00:02:05.608 Compiler for C supports arguments -mavx: YES 00:02:05.608 Message: lib/net: Defining dependency "net" 00:02:05.608 Message: lib/meter: Defining dependency "meter" 00:02:05.608 Message: lib/ethdev: Defining dependency "ethdev" 00:02:05.608 Message: lib/pci: Defining dependency "pci" 00:02:05.608 Message: lib/cmdline: Defining dependency "cmdline" 00:02:05.608 Message: lib/hash: Defining dependency "hash" 00:02:05.608 Message: lib/timer: Defining dependency "timer" 00:02:05.608 Message: lib/compressdev: Defining dependency "compressdev" 00:02:05.608 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:05.608 Message: lib/dmadev: Defining dependency "dmadev" 00:02:05.608 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:05.608 Message: lib/power: Defining dependency "power" 00:02:05.608 Message: lib/reorder: Defining dependency "reorder" 00:02:05.608 Message: lib/security: Defining dependency "security" 00:02:05.608 Has header "linux/userfaultfd.h" : YES 00:02:05.608 Has header "linux/vduse.h" : YES 00:02:05.608 Message: lib/vhost: Defining dependency "vhost" 00:02:05.608 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:05.608 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:05.608 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:05.608 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:05.608 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:05.608 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:05.608 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:05.608 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:05.608 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:05.608 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:05.608 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:05.608 Configuring doxy-api-html.conf using configuration 00:02:05.608 Configuring doxy-api-man.conf using configuration 00:02:05.608 Program mandb found: YES (/usr/bin/mandb) 00:02:05.608 Program sphinx-build found: NO 00:02:05.608 Configuring rte_build_config.h using configuration 00:02:05.608 Message: 00:02:05.608 ================= 00:02:05.608 Applications Enabled 00:02:05.608 ================= 00:02:05.608 00:02:05.608 apps: 00:02:05.608 00:02:05.608 00:02:05.608 Message: 00:02:05.608 ================= 00:02:05.608 Libraries Enabled 00:02:05.608 ================= 00:02:05.608 00:02:05.608 libs: 00:02:05.608 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:05.608 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:05.608 cryptodev, dmadev, power, reorder, security, vhost, 00:02:05.608 00:02:05.608 Message: 00:02:05.608 =============== 00:02:05.608 Drivers Enabled 00:02:05.608 =============== 00:02:05.608 00:02:05.608 common: 00:02:05.608 00:02:05.608 bus: 00:02:05.608 pci, vdev, 00:02:05.608 mempool: 00:02:05.608 ring, 00:02:05.608 dma: 00:02:05.608 00:02:05.608 net: 00:02:05.608 00:02:05.608 crypto: 00:02:05.608 00:02:05.608 compress: 00:02:05.608 00:02:05.608 vdpa: 00:02:05.608 00:02:05.608 00:02:05.608 Message: 00:02:05.608 ================= 00:02:05.608 Content Skipped 00:02:05.608 ================= 00:02:05.608 00:02:05.608 apps: 00:02:05.608 dumpcap: explicitly disabled via build config 00:02:05.608 graph: explicitly disabled via build config 00:02:05.608 pdump: explicitly disabled via build config 00:02:05.608 proc-info: explicitly disabled via build config 00:02:05.608 test-acl: explicitly disabled via build config 00:02:05.608 test-bbdev: explicitly disabled via build config 00:02:05.608 test-cmdline: explicitly disabled via build config 00:02:05.608 test-compress-perf: explicitly disabled via build config 00:02:05.608 test-crypto-perf: explicitly disabled via build 
config 00:02:05.608 test-dma-perf: explicitly disabled via build config 00:02:05.608 test-eventdev: explicitly disabled via build config 00:02:05.608 test-fib: explicitly disabled via build config 00:02:05.608 test-flow-perf: explicitly disabled via build config 00:02:05.608 test-gpudev: explicitly disabled via build config 00:02:05.608 test-mldev: explicitly disabled via build config 00:02:05.608 test-pipeline: explicitly disabled via build config 00:02:05.608 test-pmd: explicitly disabled via build config 00:02:05.608 test-regex: explicitly disabled via build config 00:02:05.608 test-sad: explicitly disabled via build config 00:02:05.608 test-security-perf: explicitly disabled via build config 00:02:05.608 00:02:05.608 libs: 00:02:05.608 argparse: explicitly disabled via build config 00:02:05.608 metrics: explicitly disabled via build config 00:02:05.608 acl: explicitly disabled via build config 00:02:05.608 bbdev: explicitly disabled via build config 00:02:05.608 bitratestats: explicitly disabled via build config 00:02:05.608 bpf: explicitly disabled via build config 00:02:05.608 cfgfile: explicitly disabled via build config 00:02:05.608 distributor: explicitly disabled via build config 00:02:05.608 efd: explicitly disabled via build config 00:02:05.608 eventdev: explicitly disabled via build config 00:02:05.608 dispatcher: explicitly disabled via build config 00:02:05.608 gpudev: explicitly disabled via build config 00:02:05.608 gro: explicitly disabled via build config 00:02:05.608 gso: explicitly disabled via build config 00:02:05.608 ip_frag: explicitly disabled via build config 00:02:05.608 jobstats: explicitly disabled via build config 00:02:05.608 latencystats: explicitly disabled via build config 00:02:05.608 lpm: explicitly disabled via build config 00:02:05.608 member: explicitly disabled via build config 00:02:05.608 pcapng: explicitly disabled via build config 00:02:05.608 rawdev: explicitly disabled via build config 00:02:05.608 regexdev: explicitly 
disabled via build config 00:02:05.608 mldev: explicitly disabled via build config 00:02:05.608 rib: explicitly disabled via build config 00:02:05.608 sched: explicitly disabled via build config 00:02:05.608 stack: explicitly disabled via build config 00:02:05.608 ipsec: explicitly disabled via build config 00:02:05.608 pdcp: explicitly disabled via build config 00:02:05.608 fib: explicitly disabled via build config 00:02:05.608 port: explicitly disabled via build config 00:02:05.608 pdump: explicitly disabled via build config 00:02:05.608 table: explicitly disabled via build config 00:02:05.609 pipeline: explicitly disabled via build config 00:02:05.609 graph: explicitly disabled via build config 00:02:05.609 node: explicitly disabled via build config 00:02:05.609 00:02:05.609 drivers: 00:02:05.609 common/cpt: not in enabled drivers build config 00:02:05.609 common/dpaax: not in enabled drivers build config 00:02:05.609 common/iavf: not in enabled drivers build config 00:02:05.609 common/idpf: not in enabled drivers build config 00:02:05.609 common/ionic: not in enabled drivers build config 00:02:05.609 common/mvep: not in enabled drivers build config 00:02:05.609 common/octeontx: not in enabled drivers build config 00:02:05.609 bus/auxiliary: not in enabled drivers build config 00:02:05.609 bus/cdx: not in enabled drivers build config 00:02:05.609 bus/dpaa: not in enabled drivers build config 00:02:05.609 bus/fslmc: not in enabled drivers build config 00:02:05.609 bus/ifpga: not in enabled drivers build config 00:02:05.609 bus/platform: not in enabled drivers build config 00:02:05.609 bus/uacce: not in enabled drivers build config 00:02:05.609 bus/vmbus: not in enabled drivers build config 00:02:05.609 common/cnxk: not in enabled drivers build config 00:02:05.609 common/mlx5: not in enabled drivers build config 00:02:05.609 common/nfp: not in enabled drivers build config 00:02:05.609 common/nitrox: not in enabled drivers build config 00:02:05.609 common/qat: not 
in enabled drivers build config 00:02:05.609 common/sfc_efx: not in enabled drivers build config 00:02:05.609 mempool/bucket: not in enabled drivers build config 00:02:05.609 mempool/cnxk: not in enabled drivers build config 00:02:05.609 mempool/dpaa: not in enabled drivers build config 00:02:05.609 mempool/dpaa2: not in enabled drivers build config 00:02:05.609 mempool/octeontx: not in enabled drivers build config 00:02:05.609 mempool/stack: not in enabled drivers build config 00:02:05.609 dma/cnxk: not in enabled drivers build config 00:02:05.609 dma/dpaa: not in enabled drivers build config 00:02:05.609 dma/dpaa2: not in enabled drivers build config 00:02:05.609 dma/hisilicon: not in enabled drivers build config 00:02:05.609 dma/idxd: not in enabled drivers build config 00:02:05.609 dma/ioat: not in enabled drivers build config 00:02:05.609 dma/skeleton: not in enabled drivers build config 00:02:05.609 net/af_packet: not in enabled drivers build config 00:02:05.609 net/af_xdp: not in enabled drivers build config 00:02:05.609 net/ark: not in enabled drivers build config 00:02:05.609 net/atlantic: not in enabled drivers build config 00:02:05.609 net/avp: not in enabled drivers build config 00:02:05.609 net/axgbe: not in enabled drivers build config 00:02:05.609 net/bnx2x: not in enabled drivers build config 00:02:05.609 net/bnxt: not in enabled drivers build config 00:02:05.609 net/bonding: not in enabled drivers build config 00:02:05.609 net/cnxk: not in enabled drivers build config 00:02:05.609 net/cpfl: not in enabled drivers build config 00:02:05.609 net/cxgbe: not in enabled drivers build config 00:02:05.609 net/dpaa: not in enabled drivers build config 00:02:05.609 net/dpaa2: not in enabled drivers build config 00:02:05.609 net/e1000: not in enabled drivers build config 00:02:05.609 net/ena: not in enabled drivers build config 00:02:05.609 net/enetc: not in enabled drivers build config 00:02:05.609 net/enetfec: not in enabled drivers build config 
00:02:05.609 net/enic: not in enabled drivers build config 00:02:05.609 net/failsafe: not in enabled drivers build config 00:02:05.609 net/fm10k: not in enabled drivers build config 00:02:05.609 net/gve: not in enabled drivers build config 00:02:05.609 net/hinic: not in enabled drivers build config 00:02:05.609 net/hns3: not in enabled drivers build config 00:02:05.609 net/i40e: not in enabled drivers build config 00:02:05.609 net/iavf: not in enabled drivers build config 00:02:05.609 net/ice: not in enabled drivers build config 00:02:05.609 net/idpf: not in enabled drivers build config 00:02:05.609 net/igc: not in enabled drivers build config 00:02:05.609 net/ionic: not in enabled drivers build config 00:02:05.609 net/ipn3ke: not in enabled drivers build config 00:02:05.609 net/ixgbe: not in enabled drivers build config 00:02:05.609 net/mana: not in enabled drivers build config 00:02:05.609 net/memif: not in enabled drivers build config 00:02:05.609 net/mlx4: not in enabled drivers build config 00:02:05.609 net/mlx5: not in enabled drivers build config 00:02:05.609 net/mvneta: not in enabled drivers build config 00:02:05.609 net/mvpp2: not in enabled drivers build config 00:02:05.609 net/netvsc: not in enabled drivers build config 00:02:05.609 net/nfb: not in enabled drivers build config 00:02:05.609 net/nfp: not in enabled drivers build config 00:02:05.609 net/ngbe: not in enabled drivers build config 00:02:05.609 net/null: not in enabled drivers build config 00:02:05.609 net/octeontx: not in enabled drivers build config 00:02:05.609 net/octeon_ep: not in enabled drivers build config 00:02:05.609 net/pcap: not in enabled drivers build config 00:02:05.609 net/pfe: not in enabled drivers build config 00:02:05.609 net/qede: not in enabled drivers build config 00:02:05.609 net/ring: not in enabled drivers build config 00:02:05.609 net/sfc: not in enabled drivers build config 00:02:05.609 net/softnic: not in enabled drivers build config 00:02:05.609 net/tap: not in 
enabled drivers build config 00:02:05.609 net/thunderx: not in enabled drivers build config 00:02:05.609 net/txgbe: not in enabled drivers build config 00:02:05.609 net/vdev_netvsc: not in enabled drivers build config 00:02:05.609 net/vhost: not in enabled drivers build config 00:02:05.609 net/virtio: not in enabled drivers build config 00:02:05.609 net/vmxnet3: not in enabled drivers build config 00:02:05.609 raw/*: missing internal dependency, "rawdev" 00:02:05.609 crypto/armv8: not in enabled drivers build config 00:02:05.609 crypto/bcmfs: not in enabled drivers build config 00:02:05.609 crypto/caam_jr: not in enabled drivers build config 00:02:05.609 crypto/ccp: not in enabled drivers build config 00:02:05.609 crypto/cnxk: not in enabled drivers build config 00:02:05.609 crypto/dpaa_sec: not in enabled drivers build config 00:02:05.609 crypto/dpaa2_sec: not in enabled drivers build config 00:02:05.609 crypto/ipsec_mb: not in enabled drivers build config 00:02:05.609 crypto/mlx5: not in enabled drivers build config 00:02:05.609 crypto/mvsam: not in enabled drivers build config 00:02:05.609 crypto/nitrox: not in enabled drivers build config 00:02:05.609 crypto/null: not in enabled drivers build config 00:02:05.609 crypto/octeontx: not in enabled drivers build config 00:02:05.609 crypto/openssl: not in enabled drivers build config 00:02:05.609 crypto/scheduler: not in enabled drivers build config 00:02:05.609 crypto/uadk: not in enabled drivers build config 00:02:05.609 crypto/virtio: not in enabled drivers build config 00:02:05.609 compress/isal: not in enabled drivers build config 00:02:05.609 compress/mlx5: not in enabled drivers build config 00:02:05.609 compress/nitrox: not in enabled drivers build config 00:02:05.609 compress/octeontx: not in enabled drivers build config 00:02:05.609 compress/zlib: not in enabled drivers build config 00:02:05.609 regex/*: missing internal dependency, "regexdev" 00:02:05.609 ml/*: missing internal dependency, "mldev" 
00:02:05.609 vdpa/ifc: not in enabled drivers build config 00:02:05.609 vdpa/mlx5: not in enabled drivers build config 00:02:05.609 vdpa/nfp: not in enabled drivers build config 00:02:05.609 vdpa/sfc: not in enabled drivers build config 00:02:05.609 event/*: missing internal dependency, "eventdev" 00:02:05.609 baseband/*: missing internal dependency, "bbdev" 00:02:05.609 gpu/*: missing internal dependency, "gpudev" 00:02:05.609 00:02:05.609 00:02:05.609 Build targets in project: 85 00:02:05.609 00:02:05.609 DPDK 24.03.0 00:02:05.609 00:02:05.609 User defined options 00:02:05.609 buildtype : debug 00:02:05.609 default_library : shared 00:02:05.609 libdir : lib 00:02:05.609 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:05.609 b_sanitize : address 00:02:05.609 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:05.609 c_link_args : 00:02:05.609 cpu_instruction_set: native 00:02:05.609 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:05.609 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:05.609 enable_docs : false 00:02:05.609 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:05.609 enable_kmods : false 00:02:05.609 max_lcores : 128 00:02:05.609 tests : false 00:02:05.609 00:02:05.609 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:05.609 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:05.609 [1/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:05.609 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:05.609 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:05.609 [4/268] Linking static target lib/librte_kvargs.a 00:02:05.609 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:05.609 [6/268] Linking static target lib/librte_log.a 00:02:05.609 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:05.609 [8/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.609 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:05.609 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:05.609 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:05.609 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:05.609 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:05.609 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:05.609 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:05.609 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:05.609 [17/268] Linking static target lib/librte_telemetry.a 00:02:05.609 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:05.869 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:06.128 [20/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.128 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:06.128 [22/268] Linking target lib/librte_log.so.24.1 00:02:06.128 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:06.128 [24/268] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:06.128 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:06.128 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:06.387 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:06.387 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:06.387 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:06.387 [30/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:06.387 [31/268] Linking target lib/librte_kvargs.so.24.1 00:02:06.387 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:06.387 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.644 [34/268] Linking target lib/librte_telemetry.so.24.1 00:02:06.644 [35/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:06.644 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:06.644 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:06.644 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:06.644 [39/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:06.903 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:06.903 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:06.903 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:06.903 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:06.903 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:06.903 [45/268] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:06.903 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:07.162 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:07.162 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:07.162 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:07.421 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:07.421 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:07.421 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:07.421 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:07.421 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:07.680 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:07.680 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:07.680 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:07.680 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:07.680 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:07.680 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:07.945 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:07.945 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:07.945 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:07.945 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:07.945 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:08.232 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:08.232 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 
00:02:08.232 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:08.489 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:08.489 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:08.489 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:08.489 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:08.489 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:08.489 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:08.489 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:08.746 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:08.746 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:08.746 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:08.746 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:09.004 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:09.004 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:09.004 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:09.262 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:09.262 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:09.262 [85/268] Linking static target lib/librte_ring.a 00:02:09.262 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:09.522 [87/268] Linking static target lib/librte_eal.a 00:02:09.522 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:09.522 [89/268] Linking static target lib/librte_rcu.a 00:02:09.522 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:09.522 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 
00:02:09.522 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:09.522 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:09.522 [94/268] Linking static target lib/librte_mempool.a 00:02:09.781 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:09.781 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.781 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:10.040 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.040 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:10.298 [100/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:10.298 [101/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:10.298 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:10.298 [103/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:10.298 [104/268] Linking static target lib/librte_mbuf.a 00:02:10.298 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:10.298 [106/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:10.298 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:10.557 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:10.557 [109/268] Linking static target lib/librte_meter.a 00:02:10.557 [110/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:10.557 [111/268] Linking static target lib/librte_net.a 00:02:10.814 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:10.814 [113/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.814 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:10.814 [115/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:10.814 [116/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.072 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.072 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:11.331 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.589 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:11.589 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:11.589 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:11.847 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:12.106 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:12.106 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:12.106 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:12.106 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:12.106 [128/268] Linking static target lib/librte_pci.a 00:02:12.106 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:12.364 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:12.364 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:12.364 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:12.364 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:12.364 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:12.623 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:12.623 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:12.623 [137/268] Compiling C 
object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:12.623 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:12.623 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:12.623 [140/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.623 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:12.623 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:12.623 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:12.623 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:12.623 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:12.880 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:12.880 [147/268] Linking static target lib/librte_cmdline.a 00:02:12.880 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:12.880 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:13.139 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:13.139 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:13.139 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:13.139 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:13.398 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:13.398 [155/268] Linking static target lib/librte_timer.a 00:02:13.657 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:13.657 [157/268] Linking static target lib/librte_compressdev.a 00:02:13.657 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:13.657 [159/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:13.917 [160/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:13.917 [161/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:14.176 [162/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:14.176 [163/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:14.176 [164/268] Linking static target lib/librte_hash.a 00:02:14.176 [165/268] Linking static target lib/librte_dmadev.a 00:02:14.176 [166/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.176 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:14.176 [168/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:14.176 [169/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:14.176 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:14.176 [171/268] Linking static target lib/librte_ethdev.a 00:02:14.434 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:14.434 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.692 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.693 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:14.693 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:14.693 [177/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:14.693 [178/268] Linking static target lib/librte_cryptodev.a 00:02:14.693 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:14.952 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:14.952 [181/268] Compiling C object 
lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:14.952 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.952 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:15.211 [184/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.469 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:15.469 [186/268] Linking static target lib/librte_power.a 00:02:15.469 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:15.469 [188/268] Linking static target lib/librte_reorder.a 00:02:15.469 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:15.469 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:15.727 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:15.727 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:15.727 [193/268] Linking static target lib/librte_security.a 00:02:15.985 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.243 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:16.502 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:16.502 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:16.502 [198/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.502 [199/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.761 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:17.029 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:17.029 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:17.029 [203/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:17.029 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:17.292 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:17.292 [206/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.292 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:17.292 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:17.292 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:17.593 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:17.593 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:17.593 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:17.593 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:17.593 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:17.593 [215/268] Linking static target drivers/librte_bus_vdev.a 00:02:17.853 [216/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:17.853 [217/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:17.853 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:17.853 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:17.853 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:17.853 [221/268] Linking static target drivers/librte_bus_pci.a 00:02:17.853 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:17.853 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:17.853 [224/268] Compiling C 
object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:18.111 [225/268] Linking static target drivers/librte_mempool_ring.a 00:02:18.111 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.370 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.938 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:22.228 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.228 [230/268] Linking target lib/librte_eal.so.24.1 00:02:22.228 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:22.228 [232/268] Linking target lib/librte_ring.so.24.1 00:02:22.228 [233/268] Linking target lib/librte_meter.so.24.1 00:02:22.228 [234/268] Linking target lib/librte_timer.so.24.1 00:02:22.228 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:22.228 [236/268] Linking target lib/librte_pci.so.24.1 00:02:22.228 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:22.487 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:22.488 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:22.488 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:22.488 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:22.488 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:22.488 [243/268] Linking target lib/librte_rcu.so.24.1 00:02:22.488 [244/268] Linking target lib/librte_mempool.so.24.1 00:02:22.488 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:22.488 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:22.488 [247/268] Generating symbol file 
lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:22.747 [248/268] Linking target lib/librte_mbuf.so.24.1 00:02:22.747 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:22.747 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:22.747 [251/268] Linking target lib/librte_reorder.so.24.1 00:02:22.747 [252/268] Linking target lib/librte_compressdev.so.24.1 00:02:22.747 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:02:22.747 [254/268] Linking target lib/librte_net.so.24.1 00:02:23.005 [255/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:23.005 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:23.005 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:23.005 [258/268] Linking static target lib/librte_vhost.a 00:02:23.005 [259/268] Linking target lib/librte_hash.so.24.1 00:02:23.005 [260/268] Linking target lib/librte_security.so.24.1 00:02:23.005 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:23.264 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:23.264 [263/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.523 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:23.523 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:23.523 [266/268] Linking target lib/librte_power.so.24.1 00:02:25.426 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.426 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:25.426 INFO: autodetecting backend as ninja 00:02:25.426 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:43.537 CC lib/log/log.o 00:02:43.537 CC lib/log/log_deprecated.o 00:02:43.537 CC 
lib/log/log_flags.o 00:02:43.537 CC lib/ut_mock/mock.o 00:02:43.537 CC lib/ut/ut.o 00:02:43.537 LIB libspdk_log.a 00:02:43.537 LIB libspdk_ut.a 00:02:43.537 SO libspdk_ut.so.2.0 00:02:43.537 SO libspdk_log.so.7.1 00:02:43.537 LIB libspdk_ut_mock.a 00:02:43.537 SO libspdk_ut_mock.so.6.0 00:02:43.537 SYMLINK libspdk_ut.so 00:02:43.537 SYMLINK libspdk_log.so 00:02:43.537 SYMLINK libspdk_ut_mock.so 00:02:43.537 CC lib/util/base64.o 00:02:43.537 CC lib/util/bit_array.o 00:02:43.537 CC lib/util/cpuset.o 00:02:43.537 CC lib/util/crc32.o 00:02:43.537 CC lib/util/crc16.o 00:02:43.537 CC lib/util/crc32c.o 00:02:43.537 CC lib/dma/dma.o 00:02:43.537 CC lib/ioat/ioat.o 00:02:43.537 CXX lib/trace_parser/trace.o 00:02:43.537 CC lib/vfio_user/host/vfio_user_pci.o 00:02:43.537 CC lib/util/crc32_ieee.o 00:02:43.537 CC lib/vfio_user/host/vfio_user.o 00:02:43.537 CC lib/util/crc64.o 00:02:43.537 CC lib/util/dif.o 00:02:43.537 CC lib/util/fd.o 00:02:43.537 CC lib/util/fd_group.o 00:02:43.537 LIB libspdk_dma.a 00:02:43.537 SO libspdk_dma.so.5.0 00:02:43.537 CC lib/util/file.o 00:02:43.537 CC lib/util/hexlify.o 00:02:43.537 SYMLINK libspdk_dma.so 00:02:43.537 CC lib/util/iov.o 00:02:43.537 LIB libspdk_ioat.a 00:02:43.537 CC lib/util/math.o 00:02:43.537 CC lib/util/net.o 00:02:43.537 SO libspdk_ioat.so.7.0 00:02:43.537 LIB libspdk_vfio_user.a 00:02:43.537 CC lib/util/pipe.o 00:02:43.537 CC lib/util/strerror_tls.o 00:02:43.537 SYMLINK libspdk_ioat.so 00:02:43.537 CC lib/util/string.o 00:02:43.537 SO libspdk_vfio_user.so.5.0 00:02:43.537 CC lib/util/uuid.o 00:02:43.537 CC lib/util/xor.o 00:02:43.537 CC lib/util/zipf.o 00:02:43.537 SYMLINK libspdk_vfio_user.so 00:02:43.537 CC lib/util/md5.o 00:02:43.537 LIB libspdk_util.a 00:02:43.537 LIB libspdk_trace_parser.a 00:02:43.537 SO libspdk_util.so.10.1 00:02:43.537 SO libspdk_trace_parser.so.6.0 00:02:43.537 SYMLINK libspdk_util.so 00:02:43.537 SYMLINK libspdk_trace_parser.so 00:02:43.797 CC lib/rdma_utils/rdma_utils.o 00:02:43.797 CC 
lib/env_dpdk/env.o 00:02:43.797 CC lib/env_dpdk/pci.o 00:02:43.797 CC lib/env_dpdk/memory.o 00:02:43.797 CC lib/env_dpdk/threads.o 00:02:43.797 CC lib/env_dpdk/init.o 00:02:43.797 CC lib/json/json_parse.o 00:02:43.797 CC lib/idxd/idxd.o 00:02:43.797 CC lib/conf/conf.o 00:02:43.797 CC lib/vmd/vmd.o 00:02:44.056 CC lib/env_dpdk/pci_ioat.o 00:02:44.056 LIB libspdk_rdma_utils.a 00:02:44.056 CC lib/json/json_util.o 00:02:44.056 LIB libspdk_conf.a 00:02:44.056 CC lib/json/json_write.o 00:02:44.056 SO libspdk_rdma_utils.so.1.0 00:02:44.056 SO libspdk_conf.so.6.0 00:02:44.056 SYMLINK libspdk_rdma_utils.so 00:02:44.056 CC lib/vmd/led.o 00:02:44.056 CC lib/env_dpdk/pci_virtio.o 00:02:44.056 SYMLINK libspdk_conf.so 00:02:44.315 CC lib/env_dpdk/pci_vmd.o 00:02:44.315 CC lib/env_dpdk/pci_idxd.o 00:02:44.315 CC lib/idxd/idxd_user.o 00:02:44.315 CC lib/env_dpdk/pci_event.o 00:02:44.315 CC lib/idxd/idxd_kernel.o 00:02:44.315 CC lib/rdma_provider/common.o 00:02:44.315 LIB libspdk_json.a 00:02:44.315 CC lib/env_dpdk/sigbus_handler.o 00:02:44.574 SO libspdk_json.so.6.0 00:02:44.574 CC lib/env_dpdk/pci_dpdk.o 00:02:44.574 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:44.574 SYMLINK libspdk_json.so 00:02:44.574 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:44.574 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:44.574 LIB libspdk_vmd.a 00:02:44.574 LIB libspdk_idxd.a 00:02:44.574 SO libspdk_vmd.so.6.0 00:02:44.832 CC lib/jsonrpc/jsonrpc_server.o 00:02:44.832 CC lib/jsonrpc/jsonrpc_client.o 00:02:44.832 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:44.832 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:44.832 LIB libspdk_rdma_provider.a 00:02:44.832 SO libspdk_idxd.so.12.1 00:02:44.832 SYMLINK libspdk_vmd.so 00:02:44.832 SO libspdk_rdma_provider.so.7.0 00:02:44.832 SYMLINK libspdk_idxd.so 00:02:44.832 SYMLINK libspdk_rdma_provider.so 00:02:45.090 LIB libspdk_jsonrpc.a 00:02:45.090 SO libspdk_jsonrpc.so.6.0 00:02:45.350 SYMLINK libspdk_jsonrpc.so 00:02:45.610 LIB libspdk_env_dpdk.a 00:02:45.610 CC lib/rpc/rpc.o 
00:02:45.610 SO libspdk_env_dpdk.so.15.1 00:02:45.870 SYMLINK libspdk_env_dpdk.so 00:02:45.870 LIB libspdk_rpc.a 00:02:45.870 SO libspdk_rpc.so.6.0 00:02:45.870 SYMLINK libspdk_rpc.so 00:02:46.437 CC lib/keyring/keyring.o 00:02:46.437 CC lib/keyring/keyring_rpc.o 00:02:46.437 CC lib/notify/notify.o 00:02:46.437 CC lib/notify/notify_rpc.o 00:02:46.437 CC lib/trace/trace.o 00:02:46.437 CC lib/trace/trace_flags.o 00:02:46.437 CC lib/trace/trace_rpc.o 00:02:46.437 LIB libspdk_notify.a 00:02:46.437 LIB libspdk_keyring.a 00:02:46.437 SO libspdk_notify.so.6.0 00:02:46.696 LIB libspdk_trace.a 00:02:46.696 SO libspdk_keyring.so.2.0 00:02:46.696 SYMLINK libspdk_notify.so 00:02:46.696 SO libspdk_trace.so.11.0 00:02:46.696 SYMLINK libspdk_keyring.so 00:02:46.696 SYMLINK libspdk_trace.so 00:02:46.955 CC lib/sock/sock.o 00:02:46.955 CC lib/sock/sock_rpc.o 00:02:46.955 CC lib/thread/thread.o 00:02:46.955 CC lib/thread/iobuf.o 00:02:47.523 LIB libspdk_sock.a 00:02:47.523 SO libspdk_sock.so.10.0 00:02:47.782 SYMLINK libspdk_sock.so 00:02:48.072 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:48.072 CC lib/nvme/nvme_ctrlr.o 00:02:48.072 CC lib/nvme/nvme_fabric.o 00:02:48.072 CC lib/nvme/nvme_pcie.o 00:02:48.072 CC lib/nvme/nvme_qpair.o 00:02:48.072 CC lib/nvme/nvme_pcie_common.o 00:02:48.072 CC lib/nvme/nvme.o 00:02:48.072 CC lib/nvme/nvme_ns_cmd.o 00:02:48.072 CC lib/nvme/nvme_ns.o 00:02:49.008 CC lib/nvme/nvme_quirks.o 00:02:49.008 CC lib/nvme/nvme_transport.o 00:02:49.008 CC lib/nvme/nvme_discovery.o 00:02:49.008 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:49.008 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:49.311 LIB libspdk_thread.a 00:02:49.311 CC lib/nvme/nvme_tcp.o 00:02:49.311 SO libspdk_thread.so.11.0 00:02:49.311 CC lib/nvme/nvme_opal.o 00:02:49.311 CC lib/nvme/nvme_io_msg.o 00:02:49.311 SYMLINK libspdk_thread.so 00:02:49.311 CC lib/nvme/nvme_poll_group.o 00:02:49.570 CC lib/nvme/nvme_zns.o 00:02:49.570 CC lib/nvme/nvme_stubs.o 00:02:49.829 CC lib/nvme/nvme_auth.o 00:02:49.829 CC 
lib/accel/accel.o 00:02:49.829 CC lib/accel/accel_rpc.o 00:02:49.829 CC lib/nvme/nvme_cuse.o 00:02:50.087 CC lib/nvme/nvme_rdma.o 00:02:50.087 CC lib/accel/accel_sw.o 00:02:50.346 CC lib/blob/blobstore.o 00:02:50.346 CC lib/init/json_config.o 00:02:50.346 CC lib/virtio/virtio.o 00:02:50.604 CC lib/fsdev/fsdev.o 00:02:50.604 CC lib/init/subsystem.o 00:02:50.862 CC lib/fsdev/fsdev_io.o 00:02:50.862 CC lib/init/subsystem_rpc.o 00:02:50.862 CC lib/fsdev/fsdev_rpc.o 00:02:51.123 CC lib/virtio/virtio_vhost_user.o 00:02:51.123 CC lib/init/rpc.o 00:02:51.123 CC lib/virtio/virtio_vfio_user.o 00:02:51.123 CC lib/virtio/virtio_pci.o 00:02:51.123 CC lib/blob/request.o 00:02:51.123 CC lib/blob/zeroes.o 00:02:51.123 LIB libspdk_init.a 00:02:51.381 SO libspdk_init.so.6.0 00:02:51.381 CC lib/blob/blob_bs_dev.o 00:02:51.381 SYMLINK libspdk_init.so 00:02:51.381 LIB libspdk_fsdev.a 00:02:51.381 SO libspdk_fsdev.so.2.0 00:02:51.381 LIB libspdk_accel.a 00:02:51.381 SYMLINK libspdk_fsdev.so 00:02:51.381 SO libspdk_accel.so.16.0 00:02:51.640 LIB libspdk_virtio.a 00:02:51.640 CC lib/event/app.o 00:02:51.640 CC lib/event/log_rpc.o 00:02:51.640 CC lib/event/reactor.o 00:02:51.640 SYMLINK libspdk_accel.so 00:02:51.640 CC lib/event/app_rpc.o 00:02:51.640 SO libspdk_virtio.so.7.0 00:02:51.640 CC lib/event/scheduler_static.o 00:02:51.640 SYMLINK libspdk_virtio.so 00:02:51.640 LIB libspdk_nvme.a 00:02:51.640 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:51.898 CC lib/bdev/bdev.o 00:02:51.898 CC lib/bdev/part.o 00:02:51.899 CC lib/bdev/bdev_zone.o 00:02:51.899 CC lib/bdev/bdev_rpc.o 00:02:51.899 CC lib/bdev/scsi_nvme.o 00:02:52.158 SO libspdk_nvme.so.15.0 00:02:52.158 LIB libspdk_event.a 00:02:52.158 SO libspdk_event.so.14.0 00:02:52.416 SYMLINK libspdk_event.so 00:02:52.416 SYMLINK libspdk_nvme.so 00:02:52.416 LIB libspdk_fuse_dispatcher.a 00:02:52.416 SO libspdk_fuse_dispatcher.so.1.0 00:02:52.674 SYMLINK libspdk_fuse_dispatcher.so 00:02:54.577 LIB libspdk_blob.a 00:02:54.577 SO 
libspdk_blob.so.11.0 00:02:54.836 SYMLINK libspdk_blob.so 00:02:55.094 CC lib/lvol/lvol.o 00:02:55.094 CC lib/blobfs/blobfs.o 00:02:55.094 CC lib/blobfs/tree.o 00:02:55.411 LIB libspdk_bdev.a 00:02:55.411 SO libspdk_bdev.so.17.0 00:02:55.670 SYMLINK libspdk_bdev.so 00:02:55.929 CC lib/ublk/ublk.o 00:02:55.929 CC lib/ublk/ublk_rpc.o 00:02:55.929 CC lib/ftl/ftl_core.o 00:02:55.929 CC lib/ftl/ftl_init.o 00:02:55.929 CC lib/ftl/ftl_layout.o 00:02:55.929 CC lib/nvmf/ctrlr.o 00:02:55.929 CC lib/scsi/dev.o 00:02:55.929 CC lib/nbd/nbd.o 00:02:55.929 CC lib/scsi/lun.o 00:02:56.188 CC lib/scsi/port.o 00:02:56.188 LIB libspdk_blobfs.a 00:02:56.188 CC lib/scsi/scsi.o 00:02:56.188 CC lib/ftl/ftl_debug.o 00:02:56.188 SO libspdk_blobfs.so.10.0 00:02:56.188 LIB libspdk_lvol.a 00:02:56.188 CC lib/scsi/scsi_bdev.o 00:02:56.188 SO libspdk_lvol.so.10.0 00:02:56.188 SYMLINK libspdk_blobfs.so 00:02:56.188 CC lib/scsi/scsi_pr.o 00:02:56.446 CC lib/nbd/nbd_rpc.o 00:02:56.446 CC lib/scsi/scsi_rpc.o 00:02:56.446 CC lib/ftl/ftl_io.o 00:02:56.446 SYMLINK libspdk_lvol.so 00:02:56.446 CC lib/scsi/task.o 00:02:56.446 CC lib/ftl/ftl_sb.o 00:02:56.446 CC lib/ftl/ftl_l2p.o 00:02:56.446 CC lib/ftl/ftl_l2p_flat.o 00:02:56.446 LIB libspdk_nbd.a 00:02:56.704 SO libspdk_nbd.so.7.0 00:02:56.704 CC lib/ftl/ftl_nv_cache.o 00:02:56.704 SYMLINK libspdk_nbd.so 00:02:56.704 CC lib/nvmf/ctrlr_discovery.o 00:02:56.704 CC lib/nvmf/ctrlr_bdev.o 00:02:56.704 CC lib/ftl/ftl_band.o 00:02:56.704 CC lib/ftl/ftl_band_ops.o 00:02:56.704 CC lib/ftl/ftl_writer.o 00:02:56.704 LIB libspdk_ublk.a 00:02:56.704 CC lib/ftl/ftl_rq.o 00:02:56.704 SO libspdk_ublk.so.3.0 00:02:56.962 SYMLINK libspdk_ublk.so 00:02:56.962 CC lib/ftl/ftl_reloc.o 00:02:56.962 LIB libspdk_scsi.a 00:02:56.962 SO libspdk_scsi.so.9.0 00:02:57.219 CC lib/ftl/ftl_l2p_cache.o 00:02:57.219 CC lib/ftl/ftl_p2l.o 00:02:57.219 SYMLINK libspdk_scsi.so 00:02:57.219 CC lib/ftl/ftl_p2l_log.o 00:02:57.477 CC lib/nvmf/subsystem.o 00:02:57.477 CC lib/ftl/mngt/ftl_mngt.o 
00:02:57.477 CC lib/iscsi/conn.o 00:02:57.733 CC lib/vhost/vhost.o 00:02:57.733 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:57.733 CC lib/nvmf/nvmf.o 00:02:57.991 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:57.991 CC lib/iscsi/init_grp.o 00:02:57.991 CC lib/iscsi/iscsi.o 00:02:57.991 CC lib/iscsi/param.o 00:02:57.991 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:58.249 CC lib/iscsi/portal_grp.o 00:02:58.249 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:58.506 CC lib/iscsi/tgt_node.o 00:02:58.506 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:58.764 CC lib/iscsi/iscsi_subsystem.o 00:02:58.764 CC lib/vhost/vhost_rpc.o 00:02:58.764 CC lib/nvmf/nvmf_rpc.o 00:02:58.764 CC lib/iscsi/iscsi_rpc.o 00:02:58.764 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:59.021 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:59.021 CC lib/iscsi/task.o 00:02:59.021 CC lib/nvmf/transport.o 00:02:59.279 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:59.279 CC lib/vhost/vhost_scsi.o 00:02:59.279 CC lib/vhost/vhost_blk.o 00:02:59.279 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:59.537 CC lib/nvmf/tcp.o 00:02:59.794 CC lib/nvmf/stubs.o 00:02:59.794 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:59.794 CC lib/vhost/rte_vhost_user.o 00:02:59.794 CC lib/nvmf/mdns_server.o 00:03:00.051 CC lib/nvmf/rdma.o 00:03:00.051 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:00.309 CC lib/nvmf/auth.o 00:03:00.309 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:00.567 LIB libspdk_iscsi.a 00:03:00.567 CC lib/ftl/utils/ftl_md.o 00:03:00.567 CC lib/ftl/utils/ftl_conf.o 00:03:00.567 CC lib/ftl/utils/ftl_mempool.o 00:03:00.567 SO libspdk_iscsi.so.8.0 00:03:00.825 CC lib/ftl/utils/ftl_bitmap.o 00:03:00.825 CC lib/ftl/utils/ftl_property.o 00:03:00.825 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:00.825 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:00.825 SYMLINK libspdk_iscsi.so 00:03:00.825 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:00.825 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:01.111 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:01.111 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:01.111 CC 
lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:01.111 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:01.409 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:01.409 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:01.409 LIB libspdk_vhost.a 00:03:01.409 SO libspdk_vhost.so.8.0 00:03:01.409 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:01.409 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:01.409 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:01.409 CC lib/ftl/base/ftl_base_dev.o 00:03:01.409 CC lib/ftl/base/ftl_base_bdev.o 00:03:01.667 SYMLINK libspdk_vhost.so 00:03:01.667 CC lib/ftl/ftl_trace.o 00:03:01.925 LIB libspdk_ftl.a 00:03:02.184 SO libspdk_ftl.so.9.0 00:03:02.443 SYMLINK libspdk_ftl.so 00:03:03.009 LIB libspdk_nvmf.a 00:03:03.267 SO libspdk_nvmf.so.20.0 00:03:03.526 SYMLINK libspdk_nvmf.so 00:03:04.093 CC module/env_dpdk/env_dpdk_rpc.o 00:03:04.093 CC module/sock/posix/posix.o 00:03:04.093 CC module/keyring/file/keyring.o 00:03:04.093 CC module/keyring/linux/keyring.o 00:03:04.093 CC module/scheduler/gscheduler/gscheduler.o 00:03:04.093 CC module/accel/error/accel_error.o 00:03:04.093 CC module/blob/bdev/blob_bdev.o 00:03:04.093 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:04.093 CC module/fsdev/aio/fsdev_aio.o 00:03:04.093 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:04.352 LIB libspdk_env_dpdk_rpc.a 00:03:04.352 SO libspdk_env_dpdk_rpc.so.6.0 00:03:04.352 CC module/keyring/linux/keyring_rpc.o 00:03:04.352 CC module/keyring/file/keyring_rpc.o 00:03:04.352 LIB libspdk_scheduler_gscheduler.a 00:03:04.610 SO libspdk_scheduler_gscheduler.so.4.0 00:03:04.610 SYMLINK libspdk_env_dpdk_rpc.so 00:03:04.610 LIB libspdk_scheduler_dpdk_governor.a 00:03:04.610 CC module/accel/error/accel_error_rpc.o 00:03:04.610 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:04.610 LIB libspdk_scheduler_dynamic.a 00:03:04.610 SYMLINK libspdk_scheduler_gscheduler.so 00:03:04.610 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:04.610 SO libspdk_scheduler_dynamic.so.4.0 00:03:04.610 SYMLINK libspdk_scheduler_dpdk_governor.so 
00:03:04.610 LIB libspdk_keyring_linux.a 00:03:04.610 SYMLINK libspdk_scheduler_dynamic.so 00:03:04.610 LIB libspdk_keyring_file.a 00:03:04.868 SO libspdk_keyring_linux.so.1.0 00:03:04.868 LIB libspdk_blob_bdev.a 00:03:04.868 SO libspdk_keyring_file.so.2.0 00:03:04.868 CC module/accel/ioat/accel_ioat.o 00:03:04.868 SO libspdk_blob_bdev.so.11.0 00:03:04.868 LIB libspdk_accel_error.a 00:03:04.868 SO libspdk_accel_error.so.2.0 00:03:04.868 SYMLINK libspdk_keyring_linux.so 00:03:04.868 SYMLINK libspdk_keyring_file.so 00:03:04.868 CC module/accel/ioat/accel_ioat_rpc.o 00:03:04.868 CC module/fsdev/aio/linux_aio_mgr.o 00:03:04.868 SYMLINK libspdk_blob_bdev.so 00:03:05.126 CC module/accel/iaa/accel_iaa.o 00:03:05.126 CC module/accel/dsa/accel_dsa.o 00:03:05.126 SYMLINK libspdk_accel_error.so 00:03:05.126 CC module/accel/iaa/accel_iaa_rpc.o 00:03:05.126 LIB libspdk_accel_ioat.a 00:03:05.126 SO libspdk_accel_ioat.so.6.0 00:03:05.385 CC module/accel/dsa/accel_dsa_rpc.o 00:03:05.385 CC module/bdev/delay/vbdev_delay.o 00:03:05.385 SYMLINK libspdk_accel_ioat.so 00:03:05.385 CC module/blobfs/bdev/blobfs_bdev.o 00:03:05.385 LIB libspdk_accel_iaa.a 00:03:05.385 CC module/bdev/error/vbdev_error.o 00:03:05.385 SO libspdk_accel_iaa.so.3.0 00:03:05.644 CC module/bdev/gpt/gpt.o 00:03:05.644 CC module/bdev/error/vbdev_error_rpc.o 00:03:05.645 LIB libspdk_accel_dsa.a 00:03:05.645 CC module/bdev/lvol/vbdev_lvol.o 00:03:05.645 SYMLINK libspdk_accel_iaa.so 00:03:05.645 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:05.645 LIB libspdk_fsdev_aio.a 00:03:05.645 SO libspdk_accel_dsa.so.5.0 00:03:05.645 LIB libspdk_sock_posix.a 00:03:05.645 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:05.645 SO libspdk_fsdev_aio.so.1.0 00:03:05.903 SO libspdk_sock_posix.so.6.0 00:03:05.903 SYMLINK libspdk_accel_dsa.so 00:03:05.903 SYMLINK libspdk_fsdev_aio.so 00:03:05.903 SYMLINK libspdk_sock_posix.so 00:03:06.231 CC module/bdev/gpt/vbdev_gpt.o 00:03:06.231 LIB libspdk_bdev_error.a 00:03:06.231 LIB 
libspdk_blobfs_bdev.a 00:03:06.231 SO libspdk_bdev_error.so.6.0 00:03:06.231 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:06.231 CC module/bdev/malloc/bdev_malloc.o 00:03:06.231 SO libspdk_blobfs_bdev.so.6.0 00:03:06.231 CC module/bdev/null/bdev_null.o 00:03:06.231 SYMLINK libspdk_bdev_error.so 00:03:06.231 CC module/bdev/passthru/vbdev_passthru.o 00:03:06.231 CC module/bdev/nvme/bdev_nvme.o 00:03:06.231 SYMLINK libspdk_blobfs_bdev.so 00:03:06.490 LIB libspdk_bdev_delay.a 00:03:06.490 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:06.490 SO libspdk_bdev_delay.so.6.0 00:03:06.490 LIB libspdk_bdev_gpt.a 00:03:06.490 SO libspdk_bdev_gpt.so.6.0 00:03:06.490 CC module/bdev/raid/bdev_raid.o 00:03:06.490 CC module/bdev/split/vbdev_split.o 00:03:06.490 SYMLINK libspdk_bdev_delay.so 00:03:06.749 LIB libspdk_bdev_lvol.a 00:03:06.749 SYMLINK libspdk_bdev_gpt.so 00:03:06.749 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:06.749 SO libspdk_bdev_lvol.so.6.0 00:03:06.749 CC module/bdev/null/bdev_null_rpc.o 00:03:06.749 SYMLINK libspdk_bdev_lvol.so 00:03:06.749 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:06.749 CC module/bdev/split/vbdev_split_rpc.o 00:03:07.007 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:07.007 LIB libspdk_bdev_malloc.a 00:03:07.007 SO libspdk_bdev_malloc.so.6.0 00:03:07.007 CC module/bdev/nvme/nvme_rpc.o 00:03:07.007 SYMLINK libspdk_bdev_malloc.so 00:03:07.007 CC module/bdev/nvme/bdev_mdns_client.o 00:03:07.007 LIB libspdk_bdev_null.a 00:03:07.266 LIB libspdk_bdev_split.a 00:03:07.266 LIB libspdk_bdev_passthru.a 00:03:07.266 SO libspdk_bdev_null.so.6.0 00:03:07.266 SO libspdk_bdev_split.so.6.0 00:03:07.266 SO libspdk_bdev_passthru.so.6.0 00:03:07.266 CC module/bdev/aio/bdev_aio.o 00:03:07.266 SYMLINK libspdk_bdev_passthru.so 00:03:07.266 SYMLINK libspdk_bdev_null.so 00:03:07.266 CC module/bdev/aio/bdev_aio_rpc.o 00:03:07.266 CC module/bdev/raid/bdev_raid_rpc.o 00:03:07.266 SYMLINK libspdk_bdev_split.so 00:03:07.266 CC 
module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:07.526 CC module/bdev/nvme/vbdev_opal.o 00:03:07.526 CC module/bdev/ftl/bdev_ftl.o 00:03:07.785 LIB libspdk_bdev_aio.a 00:03:07.785 CC module/bdev/iscsi/bdev_iscsi.o 00:03:07.785 LIB libspdk_bdev_zone_block.a 00:03:07.785 SO libspdk_bdev_aio.so.6.0 00:03:07.785 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:07.785 SO libspdk_bdev_zone_block.so.6.0 00:03:07.785 CC module/bdev/raid/bdev_raid_sb.o 00:03:07.785 SYMLINK libspdk_bdev_aio.so 00:03:07.785 CC module/bdev/raid/raid0.o 00:03:07.786 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:07.786 SYMLINK libspdk_bdev_zone_block.so 00:03:07.786 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:08.044 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:08.044 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:08.044 CC module/bdev/raid/raid1.o 00:03:08.044 CC module/bdev/raid/concat.o 00:03:08.044 LIB libspdk_bdev_ftl.a 00:03:08.303 SO libspdk_bdev_ftl.so.6.0 00:03:08.303 CC module/bdev/raid/raid5f.o 00:03:08.303 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:08.303 LIB libspdk_bdev_iscsi.a 00:03:08.303 SYMLINK libspdk_bdev_ftl.so 00:03:08.303 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:08.303 SO libspdk_bdev_iscsi.so.6.0 00:03:08.561 SYMLINK libspdk_bdev_iscsi.so 00:03:08.819 LIB libspdk_bdev_virtio.a 00:03:08.819 SO libspdk_bdev_virtio.so.6.0 00:03:08.819 SYMLINK libspdk_bdev_virtio.so 00:03:09.078 LIB libspdk_bdev_raid.a 00:03:09.336 SO libspdk_bdev_raid.so.6.0 00:03:09.337 SYMLINK libspdk_bdev_raid.so 00:03:10.715 LIB libspdk_bdev_nvme.a 00:03:10.974 SO libspdk_bdev_nvme.so.7.1 00:03:10.974 SYMLINK libspdk_bdev_nvme.so 00:03:11.543 CC module/event/subsystems/iobuf/iobuf.o 00:03:11.543 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:11.543 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:11.543 CC module/event/subsystems/scheduler/scheduler.o 00:03:11.543 CC module/event/subsystems/keyring/keyring.o 00:03:11.543 CC module/event/subsystems/vmd/vmd.o 00:03:11.543 CC 
module/event/subsystems/sock/sock.o 00:03:11.543 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:11.543 CC module/event/subsystems/fsdev/fsdev.o 00:03:11.829 LIB libspdk_event_vhost_blk.a 00:03:11.829 SO libspdk_event_vhost_blk.so.3.0 00:03:11.829 LIB libspdk_event_keyring.a 00:03:11.829 LIB libspdk_event_sock.a 00:03:11.829 LIB libspdk_event_scheduler.a 00:03:11.829 LIB libspdk_event_fsdev.a 00:03:11.829 LIB libspdk_event_vmd.a 00:03:11.829 SO libspdk_event_keyring.so.1.0 00:03:11.829 SO libspdk_event_sock.so.5.0 00:03:11.829 SO libspdk_event_scheduler.so.4.0 00:03:11.829 SYMLINK libspdk_event_vhost_blk.so 00:03:11.829 LIB libspdk_event_iobuf.a 00:03:11.829 SO libspdk_event_fsdev.so.1.0 00:03:11.829 SO libspdk_event_vmd.so.6.0 00:03:11.829 SO libspdk_event_iobuf.so.3.0 00:03:11.829 SYMLINK libspdk_event_keyring.so 00:03:11.829 SYMLINK libspdk_event_sock.so 00:03:11.829 SYMLINK libspdk_event_fsdev.so 00:03:11.829 SYMLINK libspdk_event_scheduler.so 00:03:12.086 SYMLINK libspdk_event_vmd.so 00:03:12.086 SYMLINK libspdk_event_iobuf.so 00:03:12.344 CC module/event/subsystems/accel/accel.o 00:03:12.344 LIB libspdk_event_accel.a 00:03:12.344 SO libspdk_event_accel.so.6.0 00:03:12.603 SYMLINK libspdk_event_accel.so 00:03:12.861 CC module/event/subsystems/bdev/bdev.o 00:03:13.120 LIB libspdk_event_bdev.a 00:03:13.120 SO libspdk_event_bdev.so.6.0 00:03:13.120 SYMLINK libspdk_event_bdev.so 00:03:13.380 CC module/event/subsystems/ublk/ublk.o 00:03:13.380 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:13.380 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:13.380 CC module/event/subsystems/scsi/scsi.o 00:03:13.380 CC module/event/subsystems/nbd/nbd.o 00:03:13.638 LIB libspdk_event_nbd.a 00:03:13.638 LIB libspdk_event_scsi.a 00:03:13.638 SO libspdk_event_nbd.so.6.0 00:03:13.638 SO libspdk_event_scsi.so.6.0 00:03:13.638 LIB libspdk_event_ublk.a 00:03:13.638 SO libspdk_event_ublk.so.3.0 00:03:13.638 SYMLINK libspdk_event_nbd.so 00:03:13.638 SYMLINK libspdk_event_scsi.so 
00:03:13.898 SYMLINK libspdk_event_ublk.so 00:03:13.898 LIB libspdk_event_nvmf.a 00:03:13.898 SO libspdk_event_nvmf.so.6.0 00:03:13.898 SYMLINK libspdk_event_nvmf.so 00:03:13.898 CC module/event/subsystems/iscsi/iscsi.o 00:03:14.158 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:14.158 LIB libspdk_event_iscsi.a 00:03:14.158 LIB libspdk_event_vhost_scsi.a 00:03:14.158 SO libspdk_event_iscsi.so.6.0 00:03:14.158 SO libspdk_event_vhost_scsi.so.3.0 00:03:14.418 SYMLINK libspdk_event_iscsi.so 00:03:14.418 SYMLINK libspdk_event_vhost_scsi.so 00:03:14.418 SO libspdk.so.6.0 00:03:14.418 SYMLINK libspdk.so 00:03:14.683 TEST_HEADER include/spdk/accel.h 00:03:14.683 TEST_HEADER include/spdk/accel_module.h 00:03:14.683 TEST_HEADER include/spdk/assert.h 00:03:14.683 CXX app/trace/trace.o 00:03:14.683 TEST_HEADER include/spdk/barrier.h 00:03:14.683 CC test/rpc_client/rpc_client_test.o 00:03:14.683 TEST_HEADER include/spdk/base64.h 00:03:14.683 TEST_HEADER include/spdk/bdev.h 00:03:14.683 TEST_HEADER include/spdk/bdev_module.h 00:03:14.683 TEST_HEADER include/spdk/bdev_zone.h 00:03:14.683 TEST_HEADER include/spdk/bit_array.h 00:03:14.683 TEST_HEADER include/spdk/bit_pool.h 00:03:14.683 TEST_HEADER include/spdk/blob_bdev.h 00:03:14.683 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:14.957 TEST_HEADER include/spdk/blobfs.h 00:03:14.957 TEST_HEADER include/spdk/blob.h 00:03:14.957 TEST_HEADER include/spdk/conf.h 00:03:14.957 TEST_HEADER include/spdk/config.h 00:03:14.957 TEST_HEADER include/spdk/cpuset.h 00:03:14.957 TEST_HEADER include/spdk/crc16.h 00:03:14.957 TEST_HEADER include/spdk/crc32.h 00:03:14.957 TEST_HEADER include/spdk/crc64.h 00:03:14.957 TEST_HEADER include/spdk/dif.h 00:03:14.957 TEST_HEADER include/spdk/dma.h 00:03:14.957 TEST_HEADER include/spdk/endian.h 00:03:14.957 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:14.957 TEST_HEADER include/spdk/env_dpdk.h 00:03:14.957 TEST_HEADER include/spdk/env.h 00:03:14.957 TEST_HEADER include/spdk/event.h 00:03:14.957 
TEST_HEADER include/spdk/fd_group.h 00:03:14.957 TEST_HEADER include/spdk/fd.h 00:03:14.957 TEST_HEADER include/spdk/file.h 00:03:14.957 TEST_HEADER include/spdk/fsdev.h 00:03:14.957 TEST_HEADER include/spdk/fsdev_module.h 00:03:14.957 TEST_HEADER include/spdk/ftl.h 00:03:14.957 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:14.957 TEST_HEADER include/spdk/gpt_spec.h 00:03:14.957 TEST_HEADER include/spdk/hexlify.h 00:03:14.957 TEST_HEADER include/spdk/histogram_data.h 00:03:14.957 TEST_HEADER include/spdk/idxd.h 00:03:14.957 TEST_HEADER include/spdk/idxd_spec.h 00:03:14.957 TEST_HEADER include/spdk/init.h 00:03:14.957 TEST_HEADER include/spdk/ioat.h 00:03:14.957 TEST_HEADER include/spdk/ioat_spec.h 00:03:14.957 TEST_HEADER include/spdk/iscsi_spec.h 00:03:14.957 CC examples/util/zipf/zipf.o 00:03:14.957 TEST_HEADER include/spdk/json.h 00:03:14.957 TEST_HEADER include/spdk/jsonrpc.h 00:03:14.957 TEST_HEADER include/spdk/keyring.h 00:03:14.957 CC test/thread/poller_perf/poller_perf.o 00:03:14.957 TEST_HEADER include/spdk/keyring_module.h 00:03:14.957 TEST_HEADER include/spdk/likely.h 00:03:14.957 TEST_HEADER include/spdk/log.h 00:03:14.957 TEST_HEADER include/spdk/lvol.h 00:03:14.957 TEST_HEADER include/spdk/md5.h 00:03:14.957 CC examples/ioat/perf/perf.o 00:03:14.957 TEST_HEADER include/spdk/memory.h 00:03:14.957 TEST_HEADER include/spdk/mmio.h 00:03:14.957 TEST_HEADER include/spdk/nbd.h 00:03:14.957 TEST_HEADER include/spdk/net.h 00:03:14.957 TEST_HEADER include/spdk/notify.h 00:03:14.957 TEST_HEADER include/spdk/nvme.h 00:03:14.957 TEST_HEADER include/spdk/nvme_intel.h 00:03:14.957 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:14.957 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:14.957 TEST_HEADER include/spdk/nvme_spec.h 00:03:14.957 TEST_HEADER include/spdk/nvme_zns.h 00:03:14.957 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:14.957 CC test/dma/test_dma/test_dma.o 00:03:14.957 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:14.957 TEST_HEADER include/spdk/nvmf.h 
00:03:14.957 TEST_HEADER include/spdk/nvmf_spec.h 00:03:14.957 TEST_HEADER include/spdk/nvmf_transport.h 00:03:14.957 TEST_HEADER include/spdk/opal.h 00:03:14.957 TEST_HEADER include/spdk/opal_spec.h 00:03:14.957 TEST_HEADER include/spdk/pci_ids.h 00:03:14.957 TEST_HEADER include/spdk/pipe.h 00:03:14.957 TEST_HEADER include/spdk/queue.h 00:03:14.957 TEST_HEADER include/spdk/reduce.h 00:03:14.957 CC test/app/bdev_svc/bdev_svc.o 00:03:14.957 TEST_HEADER include/spdk/rpc.h 00:03:14.957 TEST_HEADER include/spdk/scheduler.h 00:03:14.957 TEST_HEADER include/spdk/scsi.h 00:03:14.957 TEST_HEADER include/spdk/scsi_spec.h 00:03:14.957 TEST_HEADER include/spdk/sock.h 00:03:14.957 TEST_HEADER include/spdk/stdinc.h 00:03:14.957 TEST_HEADER include/spdk/string.h 00:03:14.957 CC test/env/mem_callbacks/mem_callbacks.o 00:03:14.957 TEST_HEADER include/spdk/thread.h 00:03:14.957 TEST_HEADER include/spdk/trace.h 00:03:14.957 TEST_HEADER include/spdk/trace_parser.h 00:03:14.957 TEST_HEADER include/spdk/tree.h 00:03:14.957 TEST_HEADER include/spdk/ublk.h 00:03:14.957 TEST_HEADER include/spdk/util.h 00:03:14.957 TEST_HEADER include/spdk/uuid.h 00:03:14.957 TEST_HEADER include/spdk/version.h 00:03:14.957 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:14.957 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:14.957 TEST_HEADER include/spdk/vhost.h 00:03:14.957 TEST_HEADER include/spdk/vmd.h 00:03:14.957 TEST_HEADER include/spdk/xor.h 00:03:14.957 TEST_HEADER include/spdk/zipf.h 00:03:14.957 CXX test/cpp_headers/accel.o 00:03:15.217 LINK rpc_client_test 00:03:15.217 LINK interrupt_tgt 00:03:15.217 LINK poller_perf 00:03:15.217 LINK zipf 00:03:15.217 LINK bdev_svc 00:03:15.217 CXX test/cpp_headers/accel_module.o 00:03:15.476 LINK ioat_perf 00:03:15.476 CC examples/ioat/verify/verify.o 00:03:15.476 LINK spdk_trace 00:03:15.476 CC test/env/vtophys/vtophys.o 00:03:15.476 CXX test/cpp_headers/assert.o 00:03:15.735 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:15.735 CC 
test/env/memory/memory_ut.o 00:03:15.735 LINK vtophys 00:03:15.735 LINK mem_callbacks 00:03:15.735 LINK verify 00:03:15.735 CC test/env/pci/pci_ut.o 00:03:15.735 LINK test_dma 00:03:15.735 CXX test/cpp_headers/barrier.o 00:03:15.994 LINK env_dpdk_post_init 00:03:15.994 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:15.994 CC app/trace_record/trace_record.o 00:03:15.994 CC test/app/histogram_perf/histogram_perf.o 00:03:16.252 CC test/app/jsoncat/jsoncat.o 00:03:16.252 CXX test/cpp_headers/base64.o 00:03:16.252 CXX test/cpp_headers/bdev.o 00:03:16.252 LINK histogram_perf 00:03:16.252 LINK pci_ut 00:03:16.252 CC test/app/stub/stub.o 00:03:16.252 CC examples/thread/thread/thread_ex.o 00:03:16.511 LINK jsoncat 00:03:16.511 LINK spdk_trace_record 00:03:16.511 LINK nvme_fuzz 00:03:16.511 CXX test/cpp_headers/bdev_module.o 00:03:16.770 CXX test/cpp_headers/bdev_zone.o 00:03:16.770 CXX test/cpp_headers/bit_array.o 00:03:16.770 LINK stub 00:03:16.770 CC test/event/event_perf/event_perf.o 00:03:16.770 CC app/nvmf_tgt/nvmf_main.o 00:03:16.770 CC test/nvme/aer/aer.o 00:03:16.770 LINK thread 00:03:16.771 CXX test/cpp_headers/bit_pool.o 00:03:17.029 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:17.029 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:17.029 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:17.029 CXX test/cpp_headers/blob_bdev.o 00:03:17.029 LINK event_perf 00:03:17.029 LINK nvmf_tgt 00:03:17.288 CXX test/cpp_headers/blobfs_bdev.o 00:03:17.288 CXX test/cpp_headers/blobfs.o 00:03:17.288 CC examples/sock/hello_world/hello_sock.o 00:03:17.288 LINK memory_ut 00:03:17.288 CC test/event/reactor/reactor.o 00:03:17.288 LINK aer 00:03:17.546 CXX test/cpp_headers/blob.o 00:03:17.546 CC test/event/reactor_perf/reactor_perf.o 00:03:17.546 LINK reactor 00:03:17.546 CC test/event/app_repeat/app_repeat.o 00:03:17.546 CXX test/cpp_headers/conf.o 00:03:17.805 LINK hello_sock 00:03:17.805 CC app/iscsi_tgt/iscsi_tgt.o 00:03:17.805 CC test/nvme/reset/reset.o 00:03:17.805 CXX 
test/cpp_headers/config.o 00:03:17.805 LINK reactor_perf 00:03:17.805 CXX test/cpp_headers/cpuset.o 00:03:17.805 LINK vhost_fuzz 00:03:17.805 LINK app_repeat 00:03:17.805 CC test/event/scheduler/scheduler.o 00:03:18.064 CXX test/cpp_headers/crc16.o 00:03:18.064 CXX test/cpp_headers/crc32.o 00:03:18.064 LINK iscsi_tgt 00:03:18.064 LINK reset 00:03:18.064 CC test/nvme/sgl/sgl.o 00:03:18.064 CC test/nvme/e2edp/nvme_dp.o 00:03:18.064 LINK scheduler 00:03:18.323 CC examples/vmd/lsvmd/lsvmd.o 00:03:18.323 CC examples/vmd/led/led.o 00:03:18.323 CXX test/cpp_headers/crc64.o 00:03:18.323 CXX test/cpp_headers/dif.o 00:03:18.323 CC test/nvme/overhead/overhead.o 00:03:18.323 LINK lsvmd 00:03:18.582 LINK nvme_dp 00:03:18.582 CXX test/cpp_headers/dma.o 00:03:18.582 CXX test/cpp_headers/endian.o 00:03:18.582 LINK sgl 00:03:18.582 LINK led 00:03:18.582 CXX test/cpp_headers/env_dpdk.o 00:03:18.841 CXX test/cpp_headers/env.o 00:03:18.841 CXX test/cpp_headers/event.o 00:03:18.841 CC app/spdk_tgt/spdk_tgt.o 00:03:18.841 CC examples/idxd/perf/perf.o 00:03:18.841 CC test/nvme/err_injection/err_injection.o 00:03:18.841 CXX test/cpp_headers/fd_group.o 00:03:18.841 CXX test/cpp_headers/fd.o 00:03:18.841 CXX test/cpp_headers/file.o 00:03:19.171 LINK overhead 00:03:19.171 CXX test/cpp_headers/fsdev.o 00:03:19.171 LINK spdk_tgt 00:03:19.171 LINK iscsi_fuzz 00:03:19.171 LINK err_injection 00:03:19.171 CC test/nvme/startup/startup.o 00:03:19.171 LINK idxd_perf 00:03:19.171 CC test/nvme/reserve/reserve.o 00:03:19.171 CXX test/cpp_headers/fsdev_module.o 00:03:19.439 CC test/nvme/simple_copy/simple_copy.o 00:03:19.440 CC test/nvme/connect_stress/connect_stress.o 00:03:19.440 CC test/nvme/boot_partition/boot_partition.o 00:03:19.440 LINK startup 00:03:19.698 CXX test/cpp_headers/ftl.o 00:03:19.698 CC test/nvme/compliance/nvme_compliance.o 00:03:19.698 CXX test/cpp_headers/fuse_dispatcher.o 00:03:19.698 LINK boot_partition 00:03:19.698 LINK reserve 00:03:19.698 LINK simple_copy 00:03:19.698 CC 
app/spdk_lspci/spdk_lspci.o 00:03:19.698 CC app/spdk_nvme_perf/perf.o 00:03:19.698 LINK connect_stress 00:03:19.698 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:19.955 CXX test/cpp_headers/gpt_spec.o 00:03:19.955 CXX test/cpp_headers/hexlify.o 00:03:19.955 CC test/nvme/fused_ordering/fused_ordering.o 00:03:19.955 LINK spdk_lspci 00:03:19.955 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:19.955 LINK nvme_compliance 00:03:19.955 CXX test/cpp_headers/histogram_data.o 00:03:20.213 CC test/accel/dif/dif.o 00:03:20.213 CC app/spdk_nvme_identify/identify.o 00:03:20.213 CXX test/cpp_headers/idxd.o 00:03:20.213 LINK hello_fsdev 00:03:20.213 CC test/blobfs/mkfs/mkfs.o 00:03:20.213 LINK doorbell_aers 00:03:20.213 LINK fused_ordering 00:03:20.471 CC test/lvol/esnap/esnap.o 00:03:20.471 LINK mkfs 00:03:20.471 CC examples/accel/perf/accel_perf.o 00:03:20.471 CXX test/cpp_headers/idxd_spec.o 00:03:20.729 CXX test/cpp_headers/init.o 00:03:20.729 CXX test/cpp_headers/ioat.o 00:03:20.729 CXX test/cpp_headers/ioat_spec.o 00:03:20.729 CC test/nvme/fdp/fdp.o 00:03:20.987 CXX test/cpp_headers/iscsi_spec.o 00:03:20.987 CC app/spdk_nvme_discover/discovery_aer.o 00:03:20.987 LINK dif 00:03:21.247 CC examples/nvme/hello_world/hello_world.o 00:03:21.247 LINK spdk_nvme_discover 00:03:21.247 CXX test/cpp_headers/json.o 00:03:21.247 CC examples/blob/hello_world/hello_blob.o 00:03:21.247 LINK spdk_nvme_perf 00:03:21.247 CXX test/cpp_headers/jsonrpc.o 00:03:21.506 CXX test/cpp_headers/keyring.o 00:03:21.506 LINK spdk_nvme_identify 00:03:21.506 LINK fdp 00:03:21.506 LINK hello_world 00:03:21.506 CXX test/cpp_headers/keyring_module.o 00:03:21.506 CC examples/blob/cli/blobcli.o 00:03:21.506 LINK hello_blob 00:03:21.766 LINK accel_perf 00:03:21.766 CC test/nvme/cuse/cuse.o 00:03:21.766 CC examples/nvme/reconnect/reconnect.o 00:03:21.766 CC app/spdk_top/spdk_top.o 00:03:21.766 CC app/vhost/vhost.o 00:03:21.766 CXX test/cpp_headers/likely.o 00:03:22.026 CC 
examples/nvme/nvme_manage/nvme_manage.o 00:03:22.026 LINK vhost 00:03:22.026 CC examples/nvme/arbitration/arbitration.o 00:03:22.026 LINK reconnect 00:03:22.286 CXX test/cpp_headers/log.o 00:03:22.286 CC examples/nvme/hotplug/hotplug.o 00:03:22.286 LINK blobcli 00:03:22.286 CXX test/cpp_headers/lvol.o 00:03:22.544 CXX test/cpp_headers/md5.o 00:03:22.544 CXX test/cpp_headers/memory.o 00:03:22.544 LINK arbitration 00:03:22.544 LINK hotplug 00:03:22.804 CXX test/cpp_headers/mmio.o 00:03:22.804 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:22.804 CC test/bdev/bdevio/bdevio.o 00:03:22.804 CC examples/nvme/abort/abort.o 00:03:22.804 CXX test/cpp_headers/nbd.o 00:03:22.804 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:22.804 CXX test/cpp_headers/net.o 00:03:22.804 LINK nvme_manage 00:03:23.105 LINK spdk_top 00:03:23.105 LINK pmr_persistence 00:03:23.105 LINK cmb_copy 00:03:23.105 CXX test/cpp_headers/notify.o 00:03:23.105 CC examples/bdev/hello_world/hello_bdev.o 00:03:23.466 CXX test/cpp_headers/nvme.o 00:03:23.466 CC examples/bdev/bdevperf/bdevperf.o 00:03:23.466 LINK abort 00:03:23.466 CC app/spdk_dd/spdk_dd.o 00:03:23.466 LINK bdevio 00:03:23.466 CXX test/cpp_headers/nvme_intel.o 00:03:23.466 LINK cuse 00:03:23.466 LINK hello_bdev 00:03:23.740 CXX test/cpp_headers/nvme_ocssd.o 00:03:23.740 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:23.740 CXX test/cpp_headers/nvme_spec.o 00:03:23.741 CC app/fio/nvme/fio_plugin.o 00:03:23.741 CXX test/cpp_headers/nvme_zns.o 00:03:23.741 CC app/fio/bdev/fio_plugin.o 00:03:23.741 CXX test/cpp_headers/nvmf_cmd.o 00:03:23.741 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:23.741 LINK spdk_dd 00:03:23.741 CXX test/cpp_headers/nvmf.o 00:03:23.741 CXX test/cpp_headers/nvmf_spec.o 00:03:23.998 CXX test/cpp_headers/nvmf_transport.o 00:03:23.998 CXX test/cpp_headers/opal.o 00:03:23.998 CXX test/cpp_headers/opal_spec.o 00:03:23.998 CXX test/cpp_headers/pci_ids.o 00:03:23.998 CXX test/cpp_headers/pipe.o 00:03:23.998 CXX 
test/cpp_headers/queue.o 00:03:23.998 CXX test/cpp_headers/reduce.o 00:03:24.256 CXX test/cpp_headers/rpc.o 00:03:24.256 CXX test/cpp_headers/scheduler.o 00:03:24.256 CXX test/cpp_headers/scsi.o 00:03:24.256 CXX test/cpp_headers/scsi_spec.o 00:03:24.256 CXX test/cpp_headers/sock.o 00:03:24.256 LINK spdk_bdev 00:03:24.256 LINK bdevperf 00:03:24.256 CXX test/cpp_headers/stdinc.o 00:03:24.256 CXX test/cpp_headers/string.o 00:03:24.256 CXX test/cpp_headers/thread.o 00:03:24.515 CXX test/cpp_headers/trace.o 00:03:24.515 CXX test/cpp_headers/trace_parser.o 00:03:24.515 CXX test/cpp_headers/tree.o 00:03:24.515 LINK spdk_nvme 00:03:24.515 CXX test/cpp_headers/ublk.o 00:03:24.515 CXX test/cpp_headers/util.o 00:03:24.515 CXX test/cpp_headers/uuid.o 00:03:24.515 CXX test/cpp_headers/version.o 00:03:24.515 CXX test/cpp_headers/vfio_user_pci.o 00:03:24.515 CXX test/cpp_headers/vfio_user_spec.o 00:03:24.515 CXX test/cpp_headers/vhost.o 00:03:24.515 CXX test/cpp_headers/vmd.o 00:03:24.774 CXX test/cpp_headers/xor.o 00:03:24.774 CXX test/cpp_headers/zipf.o 00:03:24.774 CC examples/nvmf/nvmf/nvmf.o 00:03:25.341 LINK nvmf 00:03:27.246 LINK esnap 00:03:27.815 00:03:27.815 real 1m33.734s 00:03:27.815 user 8m26.223s 00:03:27.815 sys 1m59.559s 00:03:27.815 15:11:14 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:27.815 15:11:14 make -- common/autotest_common.sh@10 -- $ set +x 00:03:27.815 ************************************ 00:03:27.815 END TEST make 00:03:27.815 ************************************ 00:03:27.815 15:11:14 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:27.815 15:11:14 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:27.815 15:11:14 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:27.815 15:11:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.815 15:11:14 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:27.815 15:11:14 -- pm/common@44 -- $ pid=5250 
00:03:27.815 15:11:14 -- pm/common@50 -- $ kill -TERM 5250 00:03:27.816 15:11:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.816 15:11:14 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:27.816 15:11:14 -- pm/common@44 -- $ pid=5252 00:03:27.816 15:11:14 -- pm/common@50 -- $ kill -TERM 5252 00:03:27.816 15:11:14 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:27.816 15:11:14 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:27.816 15:11:14 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:27.816 15:11:14 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:27.816 15:11:14 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:28.075 15:11:14 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:28.075 15:11:14 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:28.075 15:11:14 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:28.075 15:11:14 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:28.075 15:11:14 -- scripts/common.sh@336 -- # IFS=.-: 00:03:28.075 15:11:14 -- scripts/common.sh@336 -- # read -ra ver1 00:03:28.075 15:11:14 -- scripts/common.sh@337 -- # IFS=.-: 00:03:28.075 15:11:14 -- scripts/common.sh@337 -- # read -ra ver2 00:03:28.075 15:11:14 -- scripts/common.sh@338 -- # local 'op=<' 00:03:28.075 15:11:14 -- scripts/common.sh@340 -- # ver1_l=2 00:03:28.075 15:11:14 -- scripts/common.sh@341 -- # ver2_l=1 00:03:28.075 15:11:14 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:28.075 15:11:14 -- scripts/common.sh@344 -- # case "$op" in 00:03:28.075 15:11:14 -- scripts/common.sh@345 -- # : 1 00:03:28.075 15:11:14 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:28.075 15:11:14 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:28.075 15:11:14 -- scripts/common.sh@365 -- # decimal 1 00:03:28.075 15:11:14 -- scripts/common.sh@353 -- # local d=1 00:03:28.075 15:11:14 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:28.075 15:11:14 -- scripts/common.sh@355 -- # echo 1 00:03:28.075 15:11:14 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:28.075 15:11:14 -- scripts/common.sh@366 -- # decimal 2 00:03:28.075 15:11:14 -- scripts/common.sh@353 -- # local d=2 00:03:28.075 15:11:14 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:28.075 15:11:14 -- scripts/common.sh@355 -- # echo 2 00:03:28.075 15:11:14 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:28.075 15:11:14 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:28.075 15:11:14 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:28.075 15:11:14 -- scripts/common.sh@368 -- # return 0 00:03:28.075 15:11:14 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:28.075 15:11:14 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:28.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.075 --rc genhtml_branch_coverage=1 00:03:28.075 --rc genhtml_function_coverage=1 00:03:28.075 --rc genhtml_legend=1 00:03:28.075 --rc geninfo_all_blocks=1 00:03:28.075 --rc geninfo_unexecuted_blocks=1 00:03:28.075 00:03:28.075 ' 00:03:28.075 15:11:14 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:28.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.075 --rc genhtml_branch_coverage=1 00:03:28.075 --rc genhtml_function_coverage=1 00:03:28.075 --rc genhtml_legend=1 00:03:28.075 --rc geninfo_all_blocks=1 00:03:28.075 --rc geninfo_unexecuted_blocks=1 00:03:28.075 00:03:28.075 ' 00:03:28.075 15:11:14 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:28.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.075 --rc genhtml_branch_coverage=1 00:03:28.075 --rc 
genhtml_function_coverage=1 00:03:28.075 --rc genhtml_legend=1 00:03:28.075 --rc geninfo_all_blocks=1 00:03:28.075 --rc geninfo_unexecuted_blocks=1 00:03:28.075 00:03:28.075 ' 00:03:28.075 15:11:14 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:28.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.075 --rc genhtml_branch_coverage=1 00:03:28.075 --rc genhtml_function_coverage=1 00:03:28.075 --rc genhtml_legend=1 00:03:28.075 --rc geninfo_all_blocks=1 00:03:28.075 --rc geninfo_unexecuted_blocks=1 00:03:28.075 00:03:28.075 ' 00:03:28.075 15:11:14 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:28.075 15:11:14 -- nvmf/common.sh@7 -- # uname -s 00:03:28.075 15:11:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:28.075 15:11:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:28.075 15:11:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:28.075 15:11:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:28.075 15:11:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:28.076 15:11:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:28.076 15:11:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:28.076 15:11:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:28.076 15:11:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:28.076 15:11:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:28.076 15:11:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f2c1538a-d621-4ee3-bb31-0925b497de45 00:03:28.076 15:11:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=f2c1538a-d621-4ee3-bb31-0925b497de45 00:03:28.076 15:11:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:28.076 15:11:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:28.076 15:11:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:28.076 15:11:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:03:28.076 15:11:14 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:28.076 15:11:14 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:28.076 15:11:14 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:28.076 15:11:14 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:28.076 15:11:14 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:28.076 15:11:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.076 15:11:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.076 15:11:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.076 15:11:14 -- paths/export.sh@5 -- # export PATH 00:03:28.076 15:11:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.076 15:11:14 -- nvmf/common.sh@51 -- # : 0 00:03:28.076 15:11:14 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:28.076 15:11:14 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:28.076 15:11:14 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:28.076 15:11:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:28.076 15:11:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:28.076 15:11:14 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:28.076 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:28.076 15:11:14 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:28.076 15:11:14 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:28.076 15:11:14 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:28.076 15:11:14 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:28.076 15:11:14 -- spdk/autotest.sh@32 -- # uname -s 00:03:28.076 15:11:14 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:28.076 15:11:14 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:28.076 15:11:14 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:28.076 15:11:14 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:28.076 15:11:14 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:28.076 15:11:14 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:28.076 15:11:14 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:28.076 15:11:14 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:28.076 15:11:14 -- spdk/autotest.sh@48 -- # udevadm_pid=54318 00:03:28.076 15:11:14 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:28.076 15:11:14 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:28.076 15:11:14 -- pm/common@17 -- # local monitor 00:03:28.076 15:11:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.076 15:11:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.076 15:11:14 -- pm/common@25 -- # sleep 1 00:03:28.076 15:11:14 -- pm/common@21 -- # date +%s 00:03:28.076 15:11:14 -- 
pm/common@21 -- # date +%s 00:03:28.076 15:11:14 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732115474 00:03:28.076 15:11:14 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732115474 00:03:28.076 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732115474_collect-vmstat.pm.log 00:03:28.076 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732115474_collect-cpu-load.pm.log 00:03:29.015 15:11:15 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:29.015 15:11:15 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:29.015 15:11:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:29.015 15:11:15 -- common/autotest_common.sh@10 -- # set +x 00:03:29.015 15:11:15 -- spdk/autotest.sh@59 -- # create_test_list 00:03:29.015 15:11:15 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:29.015 15:11:15 -- common/autotest_common.sh@10 -- # set +x 00:03:29.274 15:11:15 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:29.274 15:11:15 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:29.274 15:11:15 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:29.274 15:11:15 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:29.274 15:11:15 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:29.275 15:11:15 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:29.275 15:11:15 -- common/autotest_common.sh@1457 -- # uname 00:03:29.275 15:11:15 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:29.275 15:11:15 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:29.275 15:11:15 -- common/autotest_common.sh@1477 -- 
# uname 00:03:29.275 15:11:15 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:29.275 15:11:15 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:29.275 15:11:15 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:29.275 lcov: LCOV version 1.15 00:03:29.275 15:11:15 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:44.148 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:44.148 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:59.069 15:11:45 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:59.069 15:11:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:59.069 15:11:45 -- common/autotest_common.sh@10 -- # set +x 00:03:59.327 15:11:45 -- spdk/autotest.sh@78 -- # rm -f 00:03:59.327 15:11:45 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:59.892 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:59.892 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:59.892 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:59.892 15:11:46 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:59.892 15:11:46 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:59.892 15:11:46 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:59.892 15:11:46 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:59.892 
15:11:46 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:59.892 15:11:46 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:59.892 15:11:46 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:59.892 15:11:46 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:59.892 15:11:46 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:59.892 15:11:46 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:59.892 15:11:46 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:03:59.892 15:11:46 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:59.892 15:11:46 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:59.892 15:11:46 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:59.892 15:11:46 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:59.892 15:11:46 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:03:59.892 15:11:46 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:03:59.892 15:11:46 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:59.892 15:11:46 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:59.892 15:11:46 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:59.892 15:11:46 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:03:59.892 15:11:46 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:03:59.892 15:11:46 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:59.892 15:11:46 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:59.892 15:11:46 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:59.892 15:11:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:59.892 15:11:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:59.892 15:11:46 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:03:59.892 15:11:46 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:59.892 15:11:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:59.892 No valid GPT data, bailing 00:03:59.892 15:11:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:59.892 15:11:46 -- scripts/common.sh@394 -- # pt= 00:03:59.892 15:11:46 -- scripts/common.sh@395 -- # return 1 00:03:59.892 15:11:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:59.892 1+0 records in 00:03:59.892 1+0 records out 00:03:59.892 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00392329 s, 267 MB/s 00:03:59.892 15:11:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:59.892 15:11:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:59.892 15:11:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:59.892 15:11:46 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:59.892 15:11:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:00.149 No valid GPT data, bailing 00:04:00.149 15:11:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:00.149 15:11:46 -- scripts/common.sh@394 -- # pt= 00:04:00.149 15:11:46 -- scripts/common.sh@395 -- # return 1 00:04:00.149 15:11:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:00.149 1+0 records in 00:04:00.149 1+0 records out 00:04:00.149 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00373326 s, 281 MB/s 00:04:00.149 15:11:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.149 15:11:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:00.149 15:11:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:00.149 15:11:46 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:00.149 15:11:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:04:00.149 No valid GPT data, bailing 00:04:00.149 15:11:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:00.149 15:11:46 -- scripts/common.sh@394 -- # pt= 00:04:00.149 15:11:46 -- scripts/common.sh@395 -- # return 1 00:04:00.149 15:11:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:00.149 1+0 records in 00:04:00.149 1+0 records out 00:04:00.149 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00354384 s, 296 MB/s 00:04:00.149 15:11:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.149 15:11:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:00.149 15:11:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:00.149 15:11:46 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:00.149 15:11:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:00.149 No valid GPT data, bailing 00:04:00.149 15:11:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:00.149 15:11:46 -- scripts/common.sh@394 -- # pt= 00:04:00.149 15:11:46 -- scripts/common.sh@395 -- # return 1 00:04:00.149 15:11:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:00.149 1+0 records in 00:04:00.149 1+0 records out 00:04:00.149 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00354067 s, 296 MB/s 00:04:00.149 15:11:46 -- spdk/autotest.sh@105 -- # sync 00:04:00.437 15:11:46 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:00.437 15:11:46 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:00.437 15:11:46 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:02.969 15:11:49 -- spdk/autotest.sh@111 -- # uname -s 00:04:02.969 15:11:49 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:02.969 15:11:49 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:02.969 15:11:49 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:04:03.913 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.913 Hugepages 00:04:03.913 node hugesize free / total 00:04:03.913 node0 1048576kB 0 / 0 00:04:03.913 node0 2048kB 0 / 0 00:04:03.913 00:04:03.913 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:03.913 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:03.913 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:04.172 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:04.172 15:11:50 -- spdk/autotest.sh@117 -- # uname -s 00:04:04.172 15:11:50 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:04.172 15:11:50 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:04.172 15:11:50 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:05.114 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:05.114 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:05.114 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:05.114 15:11:51 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:06.050 15:11:52 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:06.050 15:11:52 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:06.050 15:11:52 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:06.050 15:11:52 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:06.050 15:11:52 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:06.050 15:11:52 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:06.050 15:11:52 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:06.050 15:11:52 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:06.050 15:11:52 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:06.308 15:11:52 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:06.308 15:11:52 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:06.308 15:11:52 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:06.567 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:06.567 Waiting for block devices as requested 00:04:06.826 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:06.826 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:06.826 15:11:53 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:07.084 15:11:53 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:07.084 15:11:53 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:07.084 15:11:53 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:07.084 15:11:53 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:07.084 15:11:53 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:07.084 15:11:53 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:07.084 15:11:53 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:07.084 15:11:53 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:07.084 15:11:53 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:07.084 15:11:53 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:07.084 15:11:53 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:07.084 15:11:53 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:07.084 15:11:53 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:07.084 15:11:53 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:07.084 15:11:53 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:04:07.084 15:11:53 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:07.084 15:11:53 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:07.084 15:11:53 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:07.084 15:11:53 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:07.084 15:11:53 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:07.084 15:11:53 -- common/autotest_common.sh@1543 -- # continue 00:04:07.084 15:11:53 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:07.084 15:11:53 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:07.084 15:11:53 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:07.084 15:11:53 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:07.084 15:11:53 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:07.084 15:11:53 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:07.084 15:11:53 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:07.084 15:11:53 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:07.084 15:11:53 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:07.084 15:11:53 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:07.084 15:11:53 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:07.084 15:11:53 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:07.084 15:11:53 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:07.084 15:11:53 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:07.084 15:11:53 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:07.084 15:11:53 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:07.084 15:11:53 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:04:07.084 15:11:53 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:07.084 15:11:53 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:07.084 15:11:53 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:07.084 15:11:53 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:07.084 15:11:53 -- common/autotest_common.sh@1543 -- # continue 00:04:07.084 15:11:53 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:07.084 15:11:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:07.084 15:11:53 -- common/autotest_common.sh@10 -- # set +x 00:04:07.084 15:11:53 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:07.084 15:11:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:07.084 15:11:53 -- common/autotest_common.sh@10 -- # set +x 00:04:07.084 15:11:53 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:08.019 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:08.019 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:08.019 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:08.019 15:11:54 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:08.019 15:11:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:08.019 15:11:54 -- common/autotest_common.sh@10 -- # set +x 00:04:08.280 15:11:54 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:08.280 15:11:54 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:08.280 15:11:54 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:08.280 15:11:54 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:08.280 15:11:54 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:08.280 15:11:54 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:08.280 15:11:54 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:08.280 15:11:54 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:08.280 
15:11:54 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:08.280 15:11:54 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:08.280 15:11:54 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:08.280 15:11:54 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:08.280 15:11:54 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:08.280 15:11:54 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:08.280 15:11:54 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:08.280 15:11:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:08.280 15:11:54 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:08.280 15:11:54 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:08.280 15:11:54 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:08.280 15:11:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:08.280 15:11:54 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:08.280 15:11:54 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:08.280 15:11:54 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:08.280 15:11:54 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:08.280 15:11:54 -- common/autotest_common.sh@1572 -- # return 0 00:04:08.280 15:11:54 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:08.280 15:11:54 -- common/autotest_common.sh@1580 -- # return 0 00:04:08.280 15:11:54 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:08.280 15:11:54 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:08.280 15:11:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:08.280 15:11:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:08.280 15:11:54 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:08.280 15:11:54 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:04:08.280 15:11:54 -- common/autotest_common.sh@10 -- # set +x 00:04:08.280 15:11:54 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:08.280 15:11:54 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:08.280 15:11:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.280 15:11:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.280 15:11:54 -- common/autotest_common.sh@10 -- # set +x 00:04:08.280 ************************************ 00:04:08.280 START TEST env 00:04:08.280 ************************************ 00:04:08.280 15:11:54 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:08.280 * Looking for test storage... 00:04:08.280 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:08.280 15:11:54 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:08.280 15:11:54 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:08.281 15:11:54 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:08.541 15:11:54 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:08.541 15:11:54 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.541 15:11:54 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.541 15:11:54 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.541 15:11:54 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.541 15:11:54 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.541 15:11:54 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.541 15:11:54 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.542 15:11:54 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.542 15:11:54 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.542 15:11:54 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.542 15:11:54 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.542 15:11:54 env -- 
scripts/common.sh@344 -- # case "$op" in 00:04:08.542 15:11:54 env -- scripts/common.sh@345 -- # : 1 00:04:08.542 15:11:54 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.542 15:11:54 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:08.542 15:11:54 env -- scripts/common.sh@365 -- # decimal 1 00:04:08.542 15:11:54 env -- scripts/common.sh@353 -- # local d=1 00:04:08.542 15:11:54 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.542 15:11:54 env -- scripts/common.sh@355 -- # echo 1 00:04:08.542 15:11:54 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.542 15:11:54 env -- scripts/common.sh@366 -- # decimal 2 00:04:08.542 15:11:54 env -- scripts/common.sh@353 -- # local d=2 00:04:08.542 15:11:54 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.542 15:11:54 env -- scripts/common.sh@355 -- # echo 2 00:04:08.542 15:11:54 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.542 15:11:54 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.542 15:11:54 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.542 15:11:54 env -- scripts/common.sh@368 -- # return 0 00:04:08.542 15:11:54 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.542 15:11:54 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:08.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.542 --rc genhtml_branch_coverage=1 00:04:08.542 --rc genhtml_function_coverage=1 00:04:08.542 --rc genhtml_legend=1 00:04:08.542 --rc geninfo_all_blocks=1 00:04:08.542 --rc geninfo_unexecuted_blocks=1 00:04:08.542 00:04:08.542 ' 00:04:08.542 15:11:54 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:08.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.542 --rc genhtml_branch_coverage=1 00:04:08.542 --rc genhtml_function_coverage=1 00:04:08.542 --rc genhtml_legend=1 00:04:08.542 --rc 
geninfo_all_blocks=1 00:04:08.542 --rc geninfo_unexecuted_blocks=1 00:04:08.542 00:04:08.542 ' 00:04:08.542 15:11:54 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:08.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.542 --rc genhtml_branch_coverage=1 00:04:08.542 --rc genhtml_function_coverage=1 00:04:08.542 --rc genhtml_legend=1 00:04:08.542 --rc geninfo_all_blocks=1 00:04:08.542 --rc geninfo_unexecuted_blocks=1 00:04:08.542 00:04:08.542 ' 00:04:08.542 15:11:54 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:08.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.542 --rc genhtml_branch_coverage=1 00:04:08.542 --rc genhtml_function_coverage=1 00:04:08.542 --rc genhtml_legend=1 00:04:08.542 --rc geninfo_all_blocks=1 00:04:08.542 --rc geninfo_unexecuted_blocks=1 00:04:08.542 00:04:08.542 ' 00:04:08.542 15:11:54 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:08.542 15:11:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.542 15:11:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.542 15:11:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.542 ************************************ 00:04:08.542 START TEST env_memory 00:04:08.542 ************************************ 00:04:08.542 15:11:54 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:08.542 00:04:08.542 00:04:08.542 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.542 http://cunit.sourceforge.net/ 00:04:08.542 00:04:08.542 00:04:08.542 Suite: memory 00:04:08.542 Test: alloc and free memory map ...[2024-11-20 15:11:54.949174] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:08.542 passed 00:04:08.542 Test: mem map translation ...[2024-11-20 15:11:54.998100] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:08.542 [2024-11-20 15:11:54.998190] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:08.542 [2024-11-20 15:11:54.998262] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:08.542 [2024-11-20 15:11:54.998288] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:08.801 passed 00:04:08.801 Test: mem map registration ...[2024-11-20 15:11:55.072536] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:08.801 [2024-11-20 15:11:55.072626] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:08.801 passed 00:04:08.801 Test: mem map adjacent registrations ...passed 00:04:08.801 00:04:08.801 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.801 suites 1 1 n/a 0 0 00:04:08.801 tests 4 4 4 0 0 00:04:08.801 asserts 152 152 152 0 n/a 00:04:08.801 00:04:08.801 Elapsed time = 0.268 seconds 00:04:08.801 00:04:08.801 real 0m0.334s 00:04:08.801 user 0m0.278s 00:04:08.801 sys 0m0.045s 00:04:08.801 15:11:55 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.801 15:11:55 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:08.801 ************************************ 00:04:08.801 END TEST env_memory 00:04:08.801 ************************************ 00:04:08.801 15:11:55 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:08.801 
15:11:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.801 15:11:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.801 15:11:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.801 ************************************ 00:04:08.801 START TEST env_vtophys 00:04:08.801 ************************************ 00:04:08.801 15:11:55 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:09.061 EAL: lib.eal log level changed from notice to debug 00:04:09.061 EAL: Detected lcore 0 as core 0 on socket 0 00:04:09.061 EAL: Detected lcore 1 as core 0 on socket 0 00:04:09.061 EAL: Detected lcore 2 as core 0 on socket 0 00:04:09.061 EAL: Detected lcore 3 as core 0 on socket 0 00:04:09.061 EAL: Detected lcore 4 as core 0 on socket 0 00:04:09.061 EAL: Detected lcore 5 as core 0 on socket 0 00:04:09.061 EAL: Detected lcore 6 as core 0 on socket 0 00:04:09.061 EAL: Detected lcore 7 as core 0 on socket 0 00:04:09.061 EAL: Detected lcore 8 as core 0 on socket 0 00:04:09.061 EAL: Detected lcore 9 as core 0 on socket 0 00:04:09.061 EAL: Maximum logical cores by configuration: 128 00:04:09.061 EAL: Detected CPU lcores: 10 00:04:09.061 EAL: Detected NUMA nodes: 1 00:04:09.061 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:09.061 EAL: Detected shared linkage of DPDK 00:04:09.061 EAL: No shared files mode enabled, IPC will be disabled 00:04:09.061 EAL: Selected IOVA mode 'PA' 00:04:09.061 EAL: Probing VFIO support... 00:04:09.061 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:09.061 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:09.061 EAL: Ask a virtual area of 0x2e000 bytes 00:04:09.061 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:09.061 EAL: Setting up physically contiguous memory... 
00:04:09.061 EAL: Setting maximum number of open files to 524288 00:04:09.061 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:09.061 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:09.061 EAL: Ask a virtual area of 0x61000 bytes 00:04:09.061 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:09.061 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:09.061 EAL: Ask a virtual area of 0x400000000 bytes 00:04:09.061 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:09.061 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:09.061 EAL: Ask a virtual area of 0x61000 bytes 00:04:09.061 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:09.061 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:09.061 EAL: Ask a virtual area of 0x400000000 bytes 00:04:09.061 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:09.061 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:09.061 EAL: Ask a virtual area of 0x61000 bytes 00:04:09.061 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:09.061 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:09.061 EAL: Ask a virtual area of 0x400000000 bytes 00:04:09.061 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:09.061 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:09.061 EAL: Ask a virtual area of 0x61000 bytes 00:04:09.061 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:09.061 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:09.061 EAL: Ask a virtual area of 0x400000000 bytes 00:04:09.061 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:09.061 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:09.061 EAL: Hugepages will be freed exactly as allocated. 
00:04:09.061 EAL: No shared files mode enabled, IPC is disabled 00:04:09.061 EAL: No shared files mode enabled, IPC is disabled 00:04:09.061 EAL: TSC frequency is ~2490000 KHz 00:04:09.061 EAL: Main lcore 0 is ready (tid=7f1f814f2a40;cpuset=[0]) 00:04:09.061 EAL: Trying to obtain current memory policy. 00:04:09.061 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.061 EAL: Restoring previous memory policy: 0 00:04:09.061 EAL: request: mp_malloc_sync 00:04:09.061 EAL: No shared files mode enabled, IPC is disabled 00:04:09.061 EAL: Heap on socket 0 was expanded by 2MB 00:04:09.061 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:09.061 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:09.061 EAL: Mem event callback 'spdk:(nil)' registered 00:04:09.061 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:09.061 00:04:09.061 00:04:09.062 CUnit - A unit testing framework for C - Version 2.1-3 00:04:09.062 http://cunit.sourceforge.net/ 00:04:09.062 00:04:09.062 00:04:09.062 Suite: components_suite 00:04:09.630 Test: vtophys_malloc_test ...passed 00:04:09.630 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:09.630 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.630 EAL: Restoring previous memory policy: 4 00:04:09.630 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.630 EAL: request: mp_malloc_sync 00:04:09.630 EAL: No shared files mode enabled, IPC is disabled 00:04:09.630 EAL: Heap on socket 0 was expanded by 4MB 00:04:09.630 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.630 EAL: request: mp_malloc_sync 00:04:09.630 EAL: No shared files mode enabled, IPC is disabled 00:04:09.630 EAL: Heap on socket 0 was shrunk by 4MB 00:04:09.630 EAL: Trying to obtain current memory policy. 
00:04:09.630 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.630 EAL: Restoring previous memory policy: 4 00:04:09.630 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.630 EAL: request: mp_malloc_sync 00:04:09.630 EAL: No shared files mode enabled, IPC is disabled 00:04:09.630 EAL: Heap on socket 0 was expanded by 6MB 00:04:09.630 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.630 EAL: request: mp_malloc_sync 00:04:09.630 EAL: No shared files mode enabled, IPC is disabled 00:04:09.630 EAL: Heap on socket 0 was shrunk by 6MB 00:04:09.630 EAL: Trying to obtain current memory policy. 00:04:09.630 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.630 EAL: Restoring previous memory policy: 4 00:04:09.630 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.630 EAL: request: mp_malloc_sync 00:04:09.630 EAL: No shared files mode enabled, IPC is disabled 00:04:09.630 EAL: Heap on socket 0 was expanded by 10MB 00:04:09.630 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.630 EAL: request: mp_malloc_sync 00:04:09.630 EAL: No shared files mode enabled, IPC is disabled 00:04:09.630 EAL: Heap on socket 0 was shrunk by 10MB 00:04:09.630 EAL: Trying to obtain current memory policy. 00:04:09.630 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.630 EAL: Restoring previous memory policy: 4 00:04:09.630 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.630 EAL: request: mp_malloc_sync 00:04:09.630 EAL: No shared files mode enabled, IPC is disabled 00:04:09.630 EAL: Heap on socket 0 was expanded by 18MB 00:04:09.630 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.630 EAL: request: mp_malloc_sync 00:04:09.630 EAL: No shared files mode enabled, IPC is disabled 00:04:09.630 EAL: Heap on socket 0 was shrunk by 18MB 00:04:09.630 EAL: Trying to obtain current memory policy. 
00:04:09.630 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.890 EAL: Restoring previous memory policy: 4 00:04:09.890 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.890 EAL: request: mp_malloc_sync 00:04:09.890 EAL: No shared files mode enabled, IPC is disabled 00:04:09.890 EAL: Heap on socket 0 was expanded by 34MB 00:04:09.890 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.890 EAL: request: mp_malloc_sync 00:04:09.890 EAL: No shared files mode enabled, IPC is disabled 00:04:09.890 EAL: Heap on socket 0 was shrunk by 34MB 00:04:09.890 EAL: Trying to obtain current memory policy. 00:04:09.890 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.890 EAL: Restoring previous memory policy: 4 00:04:09.890 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.890 EAL: request: mp_malloc_sync 00:04:09.890 EAL: No shared files mode enabled, IPC is disabled 00:04:09.890 EAL: Heap on socket 0 was expanded by 66MB 00:04:10.150 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.150 EAL: request: mp_malloc_sync 00:04:10.150 EAL: No shared files mode enabled, IPC is disabled 00:04:10.150 EAL: Heap on socket 0 was shrunk by 66MB 00:04:10.150 EAL: Trying to obtain current memory policy. 00:04:10.150 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.150 EAL: Restoring previous memory policy: 4 00:04:10.150 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.150 EAL: request: mp_malloc_sync 00:04:10.150 EAL: No shared files mode enabled, IPC is disabled 00:04:10.150 EAL: Heap on socket 0 was expanded by 130MB 00:04:10.408 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.408 EAL: request: mp_malloc_sync 00:04:10.408 EAL: No shared files mode enabled, IPC is disabled 00:04:10.408 EAL: Heap on socket 0 was shrunk by 130MB 00:04:10.667 EAL: Trying to obtain current memory policy. 
00:04:10.667 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.667 EAL: Restoring previous memory policy: 4 00:04:10.667 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.667 EAL: request: mp_malloc_sync 00:04:10.667 EAL: No shared files mode enabled, IPC is disabled 00:04:10.667 EAL: Heap on socket 0 was expanded by 258MB 00:04:11.237 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.237 EAL: request: mp_malloc_sync 00:04:11.237 EAL: No shared files mode enabled, IPC is disabled 00:04:11.237 EAL: Heap on socket 0 was shrunk by 258MB 00:04:11.807 EAL: Trying to obtain current memory policy. 00:04:11.807 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.807 EAL: Restoring previous memory policy: 4 00:04:11.807 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.807 EAL: request: mp_malloc_sync 00:04:11.807 EAL: No shared files mode enabled, IPC is disabled 00:04:11.807 EAL: Heap on socket 0 was expanded by 514MB 00:04:12.801 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.801 EAL: request: mp_malloc_sync 00:04:12.801 EAL: No shared files mode enabled, IPC is disabled 00:04:12.801 EAL: Heap on socket 0 was shrunk by 514MB 00:04:13.741 EAL: Trying to obtain current memory policy. 
00:04:13.741 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:14.001 EAL: Restoring previous memory policy: 4
00:04:14.001 EAL: Calling mem event callback 'spdk:(nil)'
00:04:14.001 EAL: request: mp_malloc_sync
00:04:14.001 EAL: No shared files mode enabled, IPC is disabled
00:04:14.001 EAL: Heap on socket 0 was expanded by 1026MB
00:04:15.908 EAL: Calling mem event callback 'spdk:(nil)'
00:04:15.908 EAL: request: mp_malloc_sync
00:04:15.908 EAL: No shared files mode enabled, IPC is disabled
00:04:15.908 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:17.922 passed
00:04:17.922
00:04:17.922 Run Summary: Type Total Ran Passed Failed Inactive
00:04:17.922 suites 1 1 n/a 0 0
00:04:17.922 tests 2 2 2 0 0
00:04:17.922 asserts 5852 5852 5852 0 n/a
00:04:17.922
00:04:17.922 Elapsed time = 8.449 seconds
00:04:17.922 EAL: Calling mem event callback 'spdk:(nil)'
00:04:17.922 EAL: request: mp_malloc_sync
00:04:17.922 EAL: No shared files mode enabled, IPC is disabled
00:04:17.922 EAL: Heap on socket 0 was shrunk by 2MB
00:04:17.922 EAL: No shared files mode enabled, IPC is disabled
00:04:17.922 EAL: No shared files mode enabled, IPC is disabled
00:04:17.922 EAL: No shared files mode enabled, IPC is disabled
00:04:17.922
00:04:17.922 real 0m8.806s
00:04:17.922 user 0m7.720s
00:04:17.922 sys 0m0.914s
00:04:17.922 ************************************
00:04:17.922 END TEST env_vtophys
00:04:17.922 ************************************
00:04:17.922 15:12:04 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:17.922 15:12:04 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:17.922 15:12:04 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:04:17.922 15:12:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:17.922 15:12:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:17.922 15:12:04 env -- common/autotest_common.sh@10 -- # set +x
00:04:17.922 ************************************
00:04:17.922 START TEST env_pci
00:04:17.922 ************************************
00:04:17.922 15:12:04 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:04:17.922
00:04:17.922
00:04:17.922 CUnit - A unit testing framework for C - Version 2.1-3
00:04:17.922 http://cunit.sourceforge.net/
00:04:17.922
00:04:17.922
00:04:17.922 Suite: pci
00:04:17.922 Test: pci_hook ...[2024-11-20 15:12:04.195883] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56631 has claimed it
00:04:17.922 passed
00:04:17.922
00:04:17.922 Run Summary: Type Total Ran Passed Failed Inactive
00:04:17.922 suites 1 1 n/a 0 0
00:04:17.922 tests 1 1 1 0 0
00:04:17.922 asserts 25 25 25 0 n/a
00:04:17.922
00:04:17.922 Elapsed time = 0.006 seconds
00:04:17.922 EAL: Cannot find device (10000:00:01.0)
00:04:17.922 EAL: Failed to attach device on primary process
00:04:17.922
00:04:17.922 real 0m0.108s
00:04:17.922 user 0m0.038s
00:04:17.922 sys 0m0.069s
00:04:17.922 ************************************
00:04:17.923 END TEST env_pci
00:04:17.923 ************************************
00:04:17.923 15:12:04 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:17.923 15:12:04 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:17.923 15:12:04 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:17.923 15:12:04 env -- env/env.sh@15 -- # uname
00:04:17.923 15:12:04 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:17.923 15:12:04 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:17.923 15:12:04 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:17.923 15:12:04 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:04:17.923 15:12:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:17.923 15:12:04 env -- common/autotest_common.sh@10 -- # set +x
00:04:17.923 ************************************
00:04:17.923 START TEST env_dpdk_post_init
00:04:17.923 ************************************
00:04:17.923 15:12:04 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:18.181 EAL: Detected CPU lcores: 10
00:04:18.181 EAL: Detected NUMA nodes: 1
00:04:18.181 EAL: Detected shared linkage of DPDK
00:04:18.181 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:18.181 EAL: Selected IOVA mode 'PA'
00:04:18.181 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:18.181 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:04:18.181 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:04:18.181 Starting DPDK initialization...
00:04:18.181 Starting SPDK post initialization...
00:04:18.181 SPDK NVMe probe
00:04:18.181 Attaching to 0000:00:10.0
00:04:18.181 Attaching to 0000:00:11.0
00:04:18.181 Attached to 0000:00:10.0
00:04:18.181 Attached to 0000:00:11.0
00:04:18.181 Cleaning up...
00:04:18.181 ************************************
00:04:18.181 END TEST env_dpdk_post_init
00:04:18.181 ************************************
00:04:18.181
00:04:18.181 real 0m0.303s
00:04:18.181 user 0m0.103s
00:04:18.181 sys 0m0.101s
00:04:18.181 15:12:04 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:18.181 15:12:04 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:18.440 15:12:04 env -- env/env.sh@26 -- # uname
00:04:18.440 15:12:04 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:18.440 15:12:04 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:04:18.440 15:12:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:18.440 15:12:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:18.440 15:12:04 env -- common/autotest_common.sh@10 -- # set +x
00:04:18.440 ************************************
00:04:18.440 START TEST env_mem_callbacks
00:04:18.440 ************************************
00:04:18.440 15:12:04 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:04:18.440 EAL: Detected CPU lcores: 10
00:04:18.440 EAL: Detected NUMA nodes: 1
00:04:18.440 EAL: Detected shared linkage of DPDK
00:04:18.440 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:18.440 EAL: Selected IOVA mode 'PA'
00:04:18.440
00:04:18.440
00:04:18.440 CUnit - A unit testing framework for C - Version 2.1-3
00:04:18.440 http://cunit.sourceforge.net/
00:04:18.440
00:04:18.440
00:04:18.440 Suite: memory
00:04:18.440 Test: test ...
00:04:18.440 register 0x200000200000 2097152
00:04:18.440 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:18.440 malloc 3145728
00:04:18.440 register 0x200000400000 4194304
00:04:18.440 buf 0x2000004fffc0 len 3145728 PASSED
00:04:18.440 malloc 64
00:04:18.440 buf 0x2000004ffec0 len 64 PASSED
00:04:18.440 malloc 4194304
00:04:18.699 register 0x200000800000 6291456
00:04:18.699 buf 0x2000009fffc0 len 4194304 PASSED
00:04:18.699 free 0x2000004fffc0 3145728
00:04:18.699 free 0x2000004ffec0 64
00:04:18.699 unregister 0x200000400000 4194304 PASSED
00:04:18.699 free 0x2000009fffc0 4194304
00:04:18.699 unregister 0x200000800000 6291456 PASSED
00:04:18.699 malloc 8388608
00:04:18.699 register 0x200000400000 10485760
00:04:18.699 buf 0x2000005fffc0 len 8388608 PASSED
00:04:18.699 free 0x2000005fffc0 8388608
00:04:18.699 unregister 0x200000400000 10485760 PASSED
00:04:18.699 passed
00:04:18.699
00:04:18.699 Run Summary: Type Total Ran Passed Failed Inactive
00:04:18.699 suites 1 1 n/a 0 0
00:04:18.699 tests 1 1 1 0 0
00:04:18.699 asserts 15 15 15 0 n/a
00:04:18.699
00:04:18.699 Elapsed time = 0.084 seconds
00:04:18.699
00:04:18.699 real 0m0.301s
00:04:18.699 user 0m0.109s
00:04:18.699 sys 0m0.089s
00:04:18.699 15:12:05 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:18.699 15:12:05 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:18.699 ************************************
00:04:18.699 END TEST env_mem_callbacks
00:04:18.699 ************************************
00:04:18.699
00:04:18.699 real 0m10.467s
00:04:18.699 user 0m8.495s
00:04:18.699 sys 0m1.584s
00:04:18.699 ************************************
00:04:18.699 END TEST env
00:04:18.699 ************************************
00:04:18.699 15:12:05 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:18.699 15:12:05 env -- common/autotest_common.sh@10 -- # set +x
00:04:18.699 15:12:05 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:04:18.699 15:12:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:18.699 15:12:05 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:18.699 15:12:05 -- common/autotest_common.sh@10 -- # set +x
00:04:18.699 ************************************
00:04:18.699 START TEST rpc
00:04:18.699 ************************************
00:04:18.958 15:12:05 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:04:18.958 * Looking for test storage...
00:04:18.958 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:04:18.958 15:12:05 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:18.958 15:12:05 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:04:18.958 15:12:05 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:18.958 15:12:05 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:18.958 15:12:05 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:18.958 15:12:05 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:18.958 15:12:05 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:18.958 15:12:05 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:18.958 15:12:05 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:18.958 15:12:05 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:18.958 15:12:05 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:18.958 15:12:05 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:18.958 15:12:05 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:18.958 15:12:05 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:18.958 15:12:05 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:18.958 15:12:05 rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:18.958 15:12:05 rpc -- scripts/common.sh@345 -- # : 1
00:04:18.958 15:12:05 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:18.958 15:12:05 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:18.958 15:12:05 rpc -- scripts/common.sh@365 -- # decimal 1
00:04:18.958 15:12:05 rpc -- scripts/common.sh@353 -- # local d=1
00:04:18.958 15:12:05 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:18.958 15:12:05 rpc -- scripts/common.sh@355 -- # echo 1
00:04:18.958 15:12:05 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:18.958 15:12:05 rpc -- scripts/common.sh@366 -- # decimal 2
00:04:18.958 15:12:05 rpc -- scripts/common.sh@353 -- # local d=2
00:04:18.958 15:12:05 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:18.958 15:12:05 rpc -- scripts/common.sh@355 -- # echo 2
00:04:18.958 15:12:05 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:18.958 15:12:05 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:18.958 15:12:05 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:18.958 15:12:05 rpc -- scripts/common.sh@368 -- # return 0
00:04:18.958 15:12:05 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:18.958 15:12:05 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:18.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:18.958 --rc genhtml_branch_coverage=1
00:04:18.958 --rc genhtml_function_coverage=1
00:04:18.958 --rc genhtml_legend=1
00:04:18.958 --rc geninfo_all_blocks=1
00:04:18.958 --rc geninfo_unexecuted_blocks=1
00:04:18.958
00:04:18.958 '
00:04:18.958 15:12:05 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:18.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:18.958 --rc genhtml_branch_coverage=1
00:04:18.958 --rc genhtml_function_coverage=1
00:04:18.958 --rc genhtml_legend=1
00:04:18.958 --rc geninfo_all_blocks=1
00:04:18.958 --rc geninfo_unexecuted_blocks=1
00:04:18.958
00:04:18.958 '
00:04:18.958 15:12:05 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:04:18.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:18.959 --rc genhtml_branch_coverage=1
00:04:18.959 --rc genhtml_function_coverage=1
00:04:18.959 --rc genhtml_legend=1
00:04:18.959 --rc geninfo_all_blocks=1
00:04:18.959 --rc geninfo_unexecuted_blocks=1
00:04:18.959
00:04:18.959 '
00:04:18.959 15:12:05 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:04:18.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:18.959 --rc genhtml_branch_coverage=1
00:04:18.959 --rc genhtml_function_coverage=1
00:04:18.959 --rc genhtml_legend=1
00:04:18.959 --rc geninfo_all_blocks=1
00:04:18.959 --rc geninfo_unexecuted_blocks=1
00:04:18.959
00:04:18.959 '
00:04:18.959 15:12:05 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56758
00:04:18.959 15:12:05 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:04:18.959 15:12:05 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:18.959 15:12:05 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56758
00:04:18.959 15:12:05 rpc -- common/autotest_common.sh@835 -- # '[' -z 56758 ']'
00:04:18.959 15:12:05 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:18.959 15:12:05 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:18.959 15:12:05 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:18.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:18.959 15:12:05 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:18.959 15:12:05 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:19.218 [2024-11-20 15:12:05.534255] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization...
00:04:19.218 [2024-11-20 15:12:05.534592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56758 ]
00:04:19.477 [2024-11-20 15:12:05.722180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:19.477 [2024-11-20 15:12:05.849563] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:19.477 [2024-11-20 15:12:05.849825] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56758' to capture a snapshot of events at runtime.
00:04:19.477 [2024-11-20 15:12:05.850042] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:19.477 [2024-11-20 15:12:05.850143] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:19.477 [2024-11-20 15:12:05.850216] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56758 for offline analysis/debug.
00:04:19.477 [2024-11-20 15:12:05.851555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:20.412 15:12:06 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:20.412 15:12:06 rpc -- common/autotest_common.sh@868 -- # return 0
00:04:20.412 15:12:06 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:04:20.412 15:12:06 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:04:20.413 15:12:06 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:04:20.413 15:12:06 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:04:20.413 15:12:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:20.413 15:12:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:20.413 15:12:06 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:20.413 ************************************
00:04:20.413 START TEST rpc_integrity
00:04:20.413 ************************************
00:04:20.413 15:12:06 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:04:20.413 15:12:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:20.413 15:12:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:20.413 15:12:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:20.413 15:12:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:20.413 15:12:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:20.413 15:12:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:20.413 15:12:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:20.413 15:12:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:20.413 15:12:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:20.413 15:12:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:20.413 15:12:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:20.413 15:12:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:04:20.413 15:12:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:20.413 15:12:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:20.413 15:12:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:20.413 15:12:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:20.413 15:12:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:20.413 {
00:04:20.413 "name": "Malloc0",
00:04:20.413 "aliases": [
00:04:20.413 "9df47308-dae3-40b4-95f8-39fe2f1304d4"
00:04:20.413 ],
00:04:20.413 "product_name": "Malloc disk",
00:04:20.413 "block_size": 512,
00:04:20.413 "num_blocks": 16384,
00:04:20.413 "uuid": "9df47308-dae3-40b4-95f8-39fe2f1304d4",
00:04:20.413 "assigned_rate_limits": {
00:04:20.413 "rw_ios_per_sec": 0,
00:04:20.413 "rw_mbytes_per_sec": 0,
00:04:20.413 "r_mbytes_per_sec": 0,
00:04:20.413 "w_mbytes_per_sec": 0
00:04:20.413 },
00:04:20.413 "claimed": false,
00:04:20.413 "zoned": false,
00:04:20.413 "supported_io_types": {
00:04:20.413 "read": true,
00:04:20.413 "write": true,
00:04:20.413 "unmap": true,
00:04:20.413 "flush": true,
00:04:20.413 "reset": true,
00:04:20.413 "nvme_admin": false,
00:04:20.413 "nvme_io": false,
00:04:20.413 "nvme_io_md": false,
00:04:20.413 "write_zeroes": true,
00:04:20.413 "zcopy": true,
00:04:20.413 "get_zone_info": false,
00:04:20.413 "zone_management": false,
00:04:20.413 "zone_append": false,
00:04:20.413 "compare": false,
00:04:20.413 "compare_and_write": false,
00:04:20.413 "abort": true,
00:04:20.413 "seek_hole": false,
00:04:20.413 "seek_data": false,
00:04:20.413 "copy": true,
00:04:20.413 "nvme_iov_md": false
00:04:20.413 },
00:04:20.413 "memory_domains": [
00:04:20.413 {
00:04:20.413 "dma_device_id": "system",
00:04:20.413 "dma_device_type": 1
00:04:20.413 },
00:04:20.413 {
00:04:20.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:20.413 "dma_device_type": 2
00:04:20.413 }
00:04:20.413 ],
00:04:20.413 "driver_specific": {}
00:04:20.413 }
00:04:20.413 ]'
00:04:20.413 15:12:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:20.672 15:12:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:20.672 15:12:06 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:04:20.672 15:12:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:20.672 15:12:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:20.672 [2024-11-20 15:12:06.925136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:04:20.672 [2024-11-20 15:12:06.925214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:20.672 [2024-11-20 15:12:06.925252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:04:20.672 [2024-11-20 15:12:06.925272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:20.672 [2024-11-20 15:12:06.927984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:20.672 [2024-11-20 15:12:06.928040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:20.672 Passthru0
00:04:20.672 15:12:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:20.672 15:12:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:20.672 15:12:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:20.672 15:12:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:20.672 15:12:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:20.672 15:12:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:20.672 {
00:04:20.672 "name": "Malloc0",
00:04:20.672 "aliases": [
00:04:20.672 "9df47308-dae3-40b4-95f8-39fe2f1304d4"
00:04:20.672 ],
00:04:20.672 "product_name": "Malloc disk",
00:04:20.672 "block_size": 512,
00:04:20.672 "num_blocks": 16384,
00:04:20.672 "uuid": "9df47308-dae3-40b4-95f8-39fe2f1304d4",
00:04:20.672 "assigned_rate_limits": {
00:04:20.672 "rw_ios_per_sec": 0,
00:04:20.672 "rw_mbytes_per_sec": 0,
00:04:20.672 "r_mbytes_per_sec": 0,
00:04:20.672 "w_mbytes_per_sec": 0
00:04:20.672 },
00:04:20.672 "claimed": true,
00:04:20.672 "claim_type": "exclusive_write",
00:04:20.672 "zoned": false,
00:04:20.672 "supported_io_types": {
00:04:20.672 "read": true,
00:04:20.672 "write": true,
00:04:20.672 "unmap": true,
00:04:20.672 "flush": true,
00:04:20.672 "reset": true,
00:04:20.672 "nvme_admin": false,
00:04:20.672 "nvme_io": false,
00:04:20.672 "nvme_io_md": false,
00:04:20.672 "write_zeroes": true,
00:04:20.672 "zcopy": true,
00:04:20.672 "get_zone_info": false,
00:04:20.672 "zone_management": false,
00:04:20.672 "zone_append": false,
00:04:20.672 "compare": false,
00:04:20.672 "compare_and_write": false,
00:04:20.672 "abort": true,
00:04:20.672 "seek_hole": false,
00:04:20.672 "seek_data": false,
00:04:20.672 "copy": true,
00:04:20.672 "nvme_iov_md": false
00:04:20.672 },
00:04:20.672 "memory_domains": [
00:04:20.672 {
00:04:20.672 "dma_device_id": "system",
00:04:20.672 "dma_device_type": 1
00:04:20.672 },
00:04:20.672 {
00:04:20.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:20.672 "dma_device_type": 2
00:04:20.672 }
00:04:20.672 ],
00:04:20.672 "driver_specific": {}
00:04:20.672 },
00:04:20.672 {
00:04:20.672 "name": "Passthru0",
00:04:20.672 "aliases": [
00:04:20.672 "54f47ca5-3067-51d0-a27f-c6b50252b7a8"
00:04:20.672 ],
00:04:20.672 "product_name": "passthru",
00:04:20.672 "block_size": 512,
00:04:20.672 "num_blocks": 16384,
00:04:20.672 "uuid": "54f47ca5-3067-51d0-a27f-c6b50252b7a8",
00:04:20.672 "assigned_rate_limits": {
00:04:20.672 "rw_ios_per_sec": 0,
00:04:20.672 "rw_mbytes_per_sec": 0,
00:04:20.672 "r_mbytes_per_sec": 0,
00:04:20.672 "w_mbytes_per_sec": 0
00:04:20.672 },
00:04:20.672 "claimed": false,
00:04:20.672 "zoned": false,
00:04:20.672 "supported_io_types": {
00:04:20.672 "read": true,
00:04:20.672 "write": true,
00:04:20.672 "unmap": true,
00:04:20.672 "flush": true,
00:04:20.672 "reset": true,
00:04:20.672 "nvme_admin": false,
00:04:20.672 "nvme_io": false,
00:04:20.672 "nvme_io_md": false,
00:04:20.672 "write_zeroes": true,
00:04:20.672 "zcopy": true,
00:04:20.672 "get_zone_info": false,
00:04:20.672 "zone_management": false,
00:04:20.672 "zone_append": false,
00:04:20.672 "compare": false,
00:04:20.672 "compare_and_write": false,
00:04:20.672 "abort": true,
00:04:20.672 "seek_hole": false,
00:04:20.672 "seek_data": false,
00:04:20.672 "copy": true,
00:04:20.672 "nvme_iov_md": false
00:04:20.672 },
00:04:20.672 "memory_domains": [
00:04:20.672 {
00:04:20.672 "dma_device_id": "system",
00:04:20.673 "dma_device_type": 1
00:04:20.673 },
00:04:20.673 {
00:04:20.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:20.673 "dma_device_type": 2
00:04:20.673 }
00:04:20.673 ],
00:04:20.673 "driver_specific": {
00:04:20.673 "passthru": {
00:04:20.673 "name": "Passthru0",
00:04:20.673 "base_bdev_name": "Malloc0"
00:04:20.673 }
00:04:20.673 }
00:04:20.673 }
00:04:20.673 ]'
00:04:20.673 15:12:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:04:20.673 15:12:07 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:20.673 15:12:07 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:20.673 15:12:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:20.673 15:12:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:20.673 15:12:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:20.673 15:12:07 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:04:20.673 15:12:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:20.673 15:12:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:20.673 15:12:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:20.673 15:12:07 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:20.673 15:12:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:20.673 15:12:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:20.673 15:12:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:20.673 15:12:07 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:20.673 15:12:07 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:04:20.673 ************************************
00:04:20.673 END TEST rpc_integrity
00:04:20.673 ************************************
00:04:20.673 15:12:07 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:20.673
00:04:20.673 real 0m0.356s
00:04:20.673 user 0m0.191s
00:04:20.673 sys 0m0.055s
00:04:20.673 15:12:07 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:20.673 15:12:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:20.932 15:12:07 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:04:20.932 15:12:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:20.932 15:12:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:20.932 15:12:07 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:20.932 ************************************
00:04:20.932 START TEST rpc_plugins
00:04:20.932 ************************************
00:04:20.932 15:12:07 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:04:20.932 15:12:07 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:04:20.932 15:12:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:20.932 15:12:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:20.932 15:12:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:20.932 15:12:07 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:04:20.932 15:12:07 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:04:20.932 15:12:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:20.932 15:12:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:20.932 15:12:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:20.932 15:12:07 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:04:20.932 {
00:04:20.932 "name": "Malloc1",
00:04:20.932 "aliases": [
00:04:20.932 "499de3b4-40f8-4e4e-b217-e815ab0bd522"
00:04:20.932 ],
00:04:20.932 "product_name": "Malloc disk",
00:04:20.932 "block_size": 4096,
00:04:20.932 "num_blocks": 256,
00:04:20.932 "uuid": "499de3b4-40f8-4e4e-b217-e815ab0bd522",
00:04:20.932 "assigned_rate_limits": {
00:04:20.932 "rw_ios_per_sec": 0,
00:04:20.932 "rw_mbytes_per_sec": 0,
00:04:20.932 "r_mbytes_per_sec": 0,
00:04:20.932 "w_mbytes_per_sec": 0
00:04:20.932 },
00:04:20.932 "claimed": false,
00:04:20.932 "zoned": false,
00:04:20.932 "supported_io_types": {
00:04:20.932 "read": true,
00:04:20.932 "write": true,
00:04:20.932 "unmap": true,
00:04:20.932 "flush": true,
00:04:20.932 "reset": true,
00:04:20.932 "nvme_admin": false,
00:04:20.932 "nvme_io": false,
00:04:20.932 "nvme_io_md": false,
00:04:20.932 "write_zeroes": true,
00:04:20.932 "zcopy": true,
00:04:20.932 "get_zone_info": false,
00:04:20.932 "zone_management": false,
00:04:20.932 "zone_append": false,
00:04:20.932 "compare": false,
00:04:20.932 "compare_and_write": false,
00:04:20.932 "abort": true,
00:04:20.932 "seek_hole": false,
00:04:20.932 "seek_data": false,
00:04:20.932 "copy": true,
00:04:20.932 "nvme_iov_md": false
00:04:20.932 },
00:04:20.932 "memory_domains": [
00:04:20.932 {
00:04:20.932 "dma_device_id": "system",
00:04:20.932 "dma_device_type": 1
00:04:20.932 },
00:04:20.932 {
00:04:20.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:20.932 "dma_device_type": 2
00:04:20.932 }
00:04:20.932 ],
00:04:20.932 "driver_specific": {}
00:04:20.932 }
00:04:20.932 ]'
00:04:20.932 15:12:07 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:04:20.932 15:12:07 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:04:20.932 15:12:07 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:04:20.932 15:12:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:20.932 15:12:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:20.932 15:12:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:20.932 15:12:07 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:04:20.932 15:12:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:20.932 15:12:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:20.932 15:12:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:20.932 15:12:07 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:04:20.932 15:12:07 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:04:20.932 ************************************
00:04:20.932 END TEST rpc_plugins
00:04:20.932 ************************************
00:04:20.932 15:12:07 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:04:20.932
00:04:20.932 real 0m0.164s
00:04:20.932 user 0m0.089s
00:04:20.932 sys 0m0.031s
00:04:20.932 15:12:07 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:20.932 15:12:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:20.932 15:12:07 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:04:20.932 15:12:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:20.932 15:12:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:20.932 15:12:07 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:20.932 ************************************
00:04:20.932 START TEST rpc_trace_cmd_test
00:04:20.932 ************************************
00:04:20.932 15:12:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test
00:04:20.932 15:12:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:04:21.190 15:12:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:04:21.190 15:12:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:21.190 15:12:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:04:21.190 15:12:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:21.190 15:12:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:04:21.190 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56758",
00:04:21.190 "tpoint_group_mask": "0x8",
00:04:21.190 "iscsi_conn": {
00:04:21.190 "mask": "0x2",
00:04:21.190 "tpoint_mask": "0x0"
00:04:21.190 },
00:04:21.190 "scsi": {
00:04:21.190 "mask": "0x4",
00:04:21.190 "tpoint_mask": "0x0"
00:04:21.190 },
00:04:21.190 "bdev": {
00:04:21.190 "mask": "0x8",
00:04:21.190 "tpoint_mask": "0xffffffffffffffff"
00:04:21.190 },
00:04:21.190 "nvmf_rdma": {
00:04:21.190 "mask": "0x10",
00:04:21.190 "tpoint_mask": "0x0"
00:04:21.190 },
00:04:21.190 "nvmf_tcp": {
00:04:21.190 "mask": "0x20",
00:04:21.190 "tpoint_mask": "0x0"
00:04:21.190 },
00:04:21.190 "ftl": {
00:04:21.190 "mask": "0x40",
00:04:21.190 "tpoint_mask": "0x0"
00:04:21.190 },
00:04:21.190 "blobfs": {
00:04:21.190 "mask": "0x80",
00:04:21.190 "tpoint_mask": "0x0"
00:04:21.190 },
00:04:21.190 "dsa": {
00:04:21.190 "mask": "0x200",
00:04:21.190 "tpoint_mask": "0x0"
00:04:21.190 },
00:04:21.190 "thread": {
00:04:21.190 "mask": "0x400",
00:04:21.190 "tpoint_mask": "0x0"
00:04:21.190 },
00:04:21.190 "nvme_pcie": {
00:04:21.190 "mask": "0x800",
00:04:21.190 "tpoint_mask": "0x0"
00:04:21.190 },
00:04:21.190 "iaa": {
00:04:21.190 "mask": "0x1000",
00:04:21.190 "tpoint_mask": "0x0"
00:04:21.190 },
00:04:21.190 "nvme_tcp": {
00:04:21.190 "mask": "0x2000",
00:04:21.190 "tpoint_mask": "0x0"
00:04:21.190 },
00:04:21.190 "bdev_nvme": {
00:04:21.190 "mask": "0x4000",
00:04:21.190 "tpoint_mask": "0x0"
00:04:21.190 },
00:04:21.190 "sock": {
00:04:21.190 "mask": "0x8000",
00:04:21.190 "tpoint_mask": "0x0"
00:04:21.190 },
00:04:21.190 "blob": {
00:04:21.190 "mask": "0x10000",
00:04:21.190 "tpoint_mask": "0x0"
00:04:21.190 },
00:04:21.190 "bdev_raid": {
00:04:21.190 "mask": "0x20000",
00:04:21.190 "tpoint_mask": "0x0"
00:04:21.190 },
00:04:21.190 "scheduler": {
00:04:21.190 "mask": "0x40000",
00:04:21.190 "tpoint_mask": "0x0"
00:04:21.190 }
00:04:21.190 }'
00:04:21.190 15:12:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:04:21.190 15:12:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:04:21.190 15:12:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:04:21.190 15:12:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:04:21.190 15:12:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:04:21.190 15:12:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:04:21.190 15:12:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:04:21.190 15:12:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:04:21.190 15:12:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:04:21.190 ************************************
00:04:21.190 END TEST rpc_trace_cmd_test
00:04:21.190 ************************************
00:04:21.190 15:12:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:04:21.190
00:04:21.190 real 0m0.245s
00:04:21.190 user
0m0.193s 00:04:21.190 sys 0m0.040s 00:04:21.190 15:12:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.190 15:12:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:21.491 15:12:07 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:21.491 15:12:07 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:21.491 15:12:07 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:21.491 15:12:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.491 15:12:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.491 15:12:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.491 ************************************ 00:04:21.491 START TEST rpc_daemon_integrity 00:04:21.491 ************************************ 00:04:21.491 15:12:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:21.491 15:12:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:21.491 15:12:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.491 15:12:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.491 15:12:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.491 15:12:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:21.491 15:12:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:21.491 15:12:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:21.491 15:12:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:21.491 15:12:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.491 15:12:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.491 15:12:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.491 15:12:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:04:21.491 15:12:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:21.491 15:12:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.491 15:12:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.491 15:12:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.491 15:12:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:21.491 { 00:04:21.491 "name": "Malloc2", 00:04:21.491 "aliases": [ 00:04:21.491 "9ee1465f-fe49-4487-a75f-98423da3e712" 00:04:21.491 ], 00:04:21.492 "product_name": "Malloc disk", 00:04:21.492 "block_size": 512, 00:04:21.492 "num_blocks": 16384, 00:04:21.492 "uuid": "9ee1465f-fe49-4487-a75f-98423da3e712", 00:04:21.492 "assigned_rate_limits": { 00:04:21.492 "rw_ios_per_sec": 0, 00:04:21.492 "rw_mbytes_per_sec": 0, 00:04:21.492 "r_mbytes_per_sec": 0, 00:04:21.492 "w_mbytes_per_sec": 0 00:04:21.492 }, 00:04:21.492 "claimed": false, 00:04:21.492 "zoned": false, 00:04:21.492 "supported_io_types": { 00:04:21.492 "read": true, 00:04:21.492 "write": true, 00:04:21.492 "unmap": true, 00:04:21.492 "flush": true, 00:04:21.492 "reset": true, 00:04:21.492 "nvme_admin": false, 00:04:21.492 "nvme_io": false, 00:04:21.492 "nvme_io_md": false, 00:04:21.492 "write_zeroes": true, 00:04:21.492 "zcopy": true, 00:04:21.492 "get_zone_info": false, 00:04:21.492 "zone_management": false, 00:04:21.492 "zone_append": false, 00:04:21.492 "compare": false, 00:04:21.492 "compare_and_write": false, 00:04:21.492 "abort": true, 00:04:21.492 "seek_hole": false, 00:04:21.492 "seek_data": false, 00:04:21.492 "copy": true, 00:04:21.492 "nvme_iov_md": false 00:04:21.492 }, 00:04:21.492 "memory_domains": [ 00:04:21.492 { 00:04:21.492 "dma_device_id": "system", 00:04:21.492 "dma_device_type": 1 00:04:21.492 }, 00:04:21.492 { 00:04:21.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.492 "dma_device_type": 2 00:04:21.492 } 
00:04:21.492 ], 00:04:21.492 "driver_specific": {} 00:04:21.492 } 00:04:21.492 ]' 00:04:21.492 15:12:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:21.492 15:12:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:21.492 15:12:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:21.492 15:12:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.492 15:12:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.492 [2024-11-20 15:12:07.860561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:21.492 [2024-11-20 15:12:07.860642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:21.492 [2024-11-20 15:12:07.860680] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:21.492 [2024-11-20 15:12:07.860696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:21.492 [2024-11-20 15:12:07.863374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:21.492 [2024-11-20 15:12:07.863430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:21.492 Passthru0 00:04:21.492 15:12:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.492 15:12:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:21.492 15:12:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.492 15:12:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.492 15:12:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.492 15:12:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:21.492 { 00:04:21.492 "name": "Malloc2", 00:04:21.492 "aliases": [ 00:04:21.492 "9ee1465f-fe49-4487-a75f-98423da3e712" 
00:04:21.492 ], 00:04:21.492 "product_name": "Malloc disk", 00:04:21.492 "block_size": 512, 00:04:21.492 "num_blocks": 16384, 00:04:21.492 "uuid": "9ee1465f-fe49-4487-a75f-98423da3e712", 00:04:21.492 "assigned_rate_limits": { 00:04:21.492 "rw_ios_per_sec": 0, 00:04:21.492 "rw_mbytes_per_sec": 0, 00:04:21.492 "r_mbytes_per_sec": 0, 00:04:21.492 "w_mbytes_per_sec": 0 00:04:21.492 }, 00:04:21.492 "claimed": true, 00:04:21.492 "claim_type": "exclusive_write", 00:04:21.492 "zoned": false, 00:04:21.492 "supported_io_types": { 00:04:21.492 "read": true, 00:04:21.492 "write": true, 00:04:21.492 "unmap": true, 00:04:21.492 "flush": true, 00:04:21.492 "reset": true, 00:04:21.492 "nvme_admin": false, 00:04:21.492 "nvme_io": false, 00:04:21.492 "nvme_io_md": false, 00:04:21.492 "write_zeroes": true, 00:04:21.492 "zcopy": true, 00:04:21.492 "get_zone_info": false, 00:04:21.492 "zone_management": false, 00:04:21.492 "zone_append": false, 00:04:21.492 "compare": false, 00:04:21.492 "compare_and_write": false, 00:04:21.492 "abort": true, 00:04:21.492 "seek_hole": false, 00:04:21.492 "seek_data": false, 00:04:21.492 "copy": true, 00:04:21.492 "nvme_iov_md": false 00:04:21.492 }, 00:04:21.492 "memory_domains": [ 00:04:21.492 { 00:04:21.492 "dma_device_id": "system", 00:04:21.492 "dma_device_type": 1 00:04:21.492 }, 00:04:21.492 { 00:04:21.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.492 "dma_device_type": 2 00:04:21.492 } 00:04:21.492 ], 00:04:21.492 "driver_specific": {} 00:04:21.492 }, 00:04:21.492 { 00:04:21.492 "name": "Passthru0", 00:04:21.492 "aliases": [ 00:04:21.492 "95f07921-659f-5e39-bf02-cf25a49eafbe" 00:04:21.492 ], 00:04:21.492 "product_name": "passthru", 00:04:21.492 "block_size": 512, 00:04:21.492 "num_blocks": 16384, 00:04:21.492 "uuid": "95f07921-659f-5e39-bf02-cf25a49eafbe", 00:04:21.492 "assigned_rate_limits": { 00:04:21.492 "rw_ios_per_sec": 0, 00:04:21.492 "rw_mbytes_per_sec": 0, 00:04:21.492 "r_mbytes_per_sec": 0, 00:04:21.492 "w_mbytes_per_sec": 0 
00:04:21.492 }, 00:04:21.492 "claimed": false, 00:04:21.492 "zoned": false, 00:04:21.492 "supported_io_types": { 00:04:21.492 "read": true, 00:04:21.492 "write": true, 00:04:21.492 "unmap": true, 00:04:21.492 "flush": true, 00:04:21.492 "reset": true, 00:04:21.492 "nvme_admin": false, 00:04:21.492 "nvme_io": false, 00:04:21.492 "nvme_io_md": false, 00:04:21.492 "write_zeroes": true, 00:04:21.492 "zcopy": true, 00:04:21.492 "get_zone_info": false, 00:04:21.492 "zone_management": false, 00:04:21.492 "zone_append": false, 00:04:21.492 "compare": false, 00:04:21.492 "compare_and_write": false, 00:04:21.492 "abort": true, 00:04:21.492 "seek_hole": false, 00:04:21.492 "seek_data": false, 00:04:21.492 "copy": true, 00:04:21.492 "nvme_iov_md": false 00:04:21.492 }, 00:04:21.492 "memory_domains": [ 00:04:21.492 { 00:04:21.492 "dma_device_id": "system", 00:04:21.492 "dma_device_type": 1 00:04:21.492 }, 00:04:21.492 { 00:04:21.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.492 "dma_device_type": 2 00:04:21.492 } 00:04:21.492 ], 00:04:21.492 "driver_specific": { 00:04:21.492 "passthru": { 00:04:21.492 "name": "Passthru0", 00:04:21.492 "base_bdev_name": "Malloc2" 00:04:21.492 } 00:04:21.492 } 00:04:21.492 } 00:04:21.492 ]' 00:04:21.492 15:12:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:21.492 15:12:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:21.492 15:12:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:21.492 15:12:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.492 15:12:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.492 15:12:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.492 15:12:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:21.492 15:12:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:21.492 15:12:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.750 15:12:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.750 15:12:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:21.750 15:12:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.750 15:12:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.750 15:12:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.750 15:12:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:21.750 15:12:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:21.750 ************************************ 00:04:21.750 END TEST rpc_daemon_integrity 00:04:21.750 ************************************ 00:04:21.750 15:12:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:21.750 00:04:21.750 real 0m0.334s 00:04:21.750 user 0m0.181s 00:04:21.750 sys 0m0.050s 00:04:21.750 15:12:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.750 15:12:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.750 15:12:08 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:21.750 15:12:08 rpc -- rpc/rpc.sh@84 -- # killprocess 56758 00:04:21.750 15:12:08 rpc -- common/autotest_common.sh@954 -- # '[' -z 56758 ']' 00:04:21.750 15:12:08 rpc -- common/autotest_common.sh@958 -- # kill -0 56758 00:04:21.750 15:12:08 rpc -- common/autotest_common.sh@959 -- # uname 00:04:21.750 15:12:08 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:21.750 15:12:08 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56758 00:04:21.750 killing process with pid 56758 00:04:21.750 15:12:08 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:21.750 15:12:08 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:04:21.750 15:12:08 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56758' 00:04:21.750 15:12:08 rpc -- common/autotest_common.sh@973 -- # kill 56758 00:04:21.750 15:12:08 rpc -- common/autotest_common.sh@978 -- # wait 56758 00:04:24.284 00:04:24.284 real 0m5.404s 00:04:24.284 user 0m5.864s 00:04:24.284 sys 0m0.935s 00:04:24.284 15:12:10 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.284 ************************************ 00:04:24.284 END TEST rpc 00:04:24.284 ************************************ 00:04:24.284 15:12:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.284 15:12:10 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:24.284 15:12:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.284 15:12:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.284 15:12:10 -- common/autotest_common.sh@10 -- # set +x 00:04:24.284 ************************************ 00:04:24.284 START TEST skip_rpc 00:04:24.284 ************************************ 00:04:24.284 15:12:10 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:24.284 * Looking for test storage... 
00:04:24.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:24.284 15:12:10 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:24.284 15:12:10 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:24.284 15:12:10 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:24.553 15:12:10 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:24.553 15:12:10 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.553 15:12:10 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.553 15:12:10 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.553 15:12:10 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.553 15:12:10 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.553 15:12:10 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.553 15:12:10 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.553 15:12:10 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.553 15:12:10 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.553 15:12:10 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.554 15:12:10 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.554 15:12:10 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:24.554 15:12:10 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:24.554 15:12:10 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.554 15:12:10 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:24.554 15:12:10 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:24.554 15:12:10 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:24.554 15:12:10 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.554 15:12:10 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:24.554 15:12:10 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.554 15:12:10 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:24.554 15:12:10 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:24.554 15:12:10 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.554 15:12:10 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:24.554 15:12:10 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.554 15:12:10 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.554 15:12:10 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.554 15:12:10 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:24.554 15:12:10 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.554 15:12:10 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:24.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.554 --rc genhtml_branch_coverage=1 00:04:24.554 --rc genhtml_function_coverage=1 00:04:24.554 --rc genhtml_legend=1 00:04:24.554 --rc geninfo_all_blocks=1 00:04:24.554 --rc geninfo_unexecuted_blocks=1 00:04:24.554 00:04:24.554 ' 00:04:24.554 15:12:10 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:24.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.554 --rc genhtml_branch_coverage=1 00:04:24.554 --rc genhtml_function_coverage=1 00:04:24.554 --rc genhtml_legend=1 00:04:24.554 --rc geninfo_all_blocks=1 00:04:24.554 --rc geninfo_unexecuted_blocks=1 00:04:24.554 00:04:24.554 ' 00:04:24.554 15:12:10 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:24.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.554 --rc genhtml_branch_coverage=1 00:04:24.554 --rc genhtml_function_coverage=1 00:04:24.554 --rc genhtml_legend=1 00:04:24.554 --rc geninfo_all_blocks=1 00:04:24.554 --rc geninfo_unexecuted_blocks=1 00:04:24.554 00:04:24.554 ' 00:04:24.554 15:12:10 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:24.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.554 --rc genhtml_branch_coverage=1 00:04:24.554 --rc genhtml_function_coverage=1 00:04:24.554 --rc genhtml_legend=1 00:04:24.554 --rc geninfo_all_blocks=1 00:04:24.554 --rc geninfo_unexecuted_blocks=1 00:04:24.554 00:04:24.554 ' 00:04:24.554 15:12:10 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:24.554 15:12:10 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:24.554 15:12:10 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:24.554 15:12:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.554 15:12:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.554 15:12:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.554 ************************************ 00:04:24.554 START TEST skip_rpc 00:04:24.554 ************************************ 00:04:24.554 15:12:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:24.554 15:12:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=56992 00:04:24.554 15:12:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:24.554 15:12:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:24.554 15:12:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:24.554 [2024-11-20 15:12:10.975933] Starting SPDK v25.01-pre 
git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:04:24.554 [2024-11-20 15:12:10.976259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56992 ] 00:04:24.843 [2024-11-20 15:12:11.157604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.843 [2024-11-20 15:12:11.274439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.118 15:12:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:30.118 15:12:15 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:30.118 15:12:15 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:30.118 15:12:15 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:30.118 15:12:15 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:30.118 15:12:15 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:30.118 15:12:15 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:30.118 15:12:15 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:30.118 15:12:15 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.118 15:12:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.118 15:12:15 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:30.118 15:12:15 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:30.118 15:12:15 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:30.118 15:12:15 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:30.118 15:12:15 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:04:30.118 15:12:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:30.118 15:12:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56992 00:04:30.118 15:12:15 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56992 ']' 00:04:30.118 15:12:15 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56992 00:04:30.118 15:12:15 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:30.118 15:12:15 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:30.118 15:12:15 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56992 00:04:30.118 15:12:15 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:30.118 killing process with pid 56992 00:04:30.118 15:12:15 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:30.119 15:12:15 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56992' 00:04:30.119 15:12:15 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56992 00:04:30.119 15:12:15 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56992 00:04:32.131 00:04:32.131 real 0m7.479s 00:04:32.131 user 0m6.993s 00:04:32.131 sys 0m0.402s 00:04:32.131 ************************************ 00:04:32.131 END TEST skip_rpc 00:04:32.131 ************************************ 00:04:32.131 15:12:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.131 15:12:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.131 15:12:18 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:32.131 15:12:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.131 15:12:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.131 15:12:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.131 
************************************ 00:04:32.131 START TEST skip_rpc_with_json 00:04:32.131 ************************************ 00:04:32.131 15:12:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:32.131 15:12:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:32.131 15:12:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57102 00:04:32.131 15:12:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:32.131 15:12:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.131 15:12:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57102 00:04:32.131 15:12:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57102 ']' 00:04:32.131 15:12:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.131 15:12:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.131 15:12:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.131 15:12:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.131 15:12:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:32.131 [2024-11-20 15:12:18.529186] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:04:32.131 [2024-11-20 15:12:18.529321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57102 ] 00:04:32.391 [2024-11-20 15:12:18.723085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.391 [2024-11-20 15:12:18.839912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.327 15:12:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.327 15:12:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:33.327 15:12:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:33.327 15:12:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.327 15:12:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.327 [2024-11-20 15:12:19.733261] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:33.327 request: 00:04:33.327 { 00:04:33.327 "trtype": "tcp", 00:04:33.327 "method": "nvmf_get_transports", 00:04:33.327 "req_id": 1 00:04:33.327 } 00:04:33.327 Got JSON-RPC error response 00:04:33.327 response: 00:04:33.327 { 00:04:33.327 "code": -19, 00:04:33.327 "message": "No such device" 00:04:33.327 } 00:04:33.327 15:12:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:33.327 15:12:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:33.327 15:12:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.327 15:12:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.327 [2024-11-20 15:12:19.745388] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:33.327 15:12:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.327 15:12:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:33.327 15:12:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.327 15:12:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.587 15:12:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.587 15:12:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:33.587 { 00:04:33.587 "subsystems": [ 00:04:33.587 { 00:04:33.587 "subsystem": "fsdev", 00:04:33.587 "config": [ 00:04:33.587 { 00:04:33.587 "method": "fsdev_set_opts", 00:04:33.587 "params": { 00:04:33.587 "fsdev_io_pool_size": 65535, 00:04:33.587 "fsdev_io_cache_size": 256 00:04:33.587 } 00:04:33.587 } 00:04:33.587 ] 00:04:33.587 }, 00:04:33.587 { 00:04:33.587 "subsystem": "keyring", 00:04:33.587 "config": [] 00:04:33.587 }, 00:04:33.587 { 00:04:33.587 "subsystem": "iobuf", 00:04:33.587 "config": [ 00:04:33.587 { 00:04:33.587 "method": "iobuf_set_options", 00:04:33.587 "params": { 00:04:33.587 "small_pool_count": 8192, 00:04:33.587 "large_pool_count": 1024, 00:04:33.587 "small_bufsize": 8192, 00:04:33.587 "large_bufsize": 135168, 00:04:33.587 "enable_numa": false 00:04:33.587 } 00:04:33.587 } 00:04:33.587 ] 00:04:33.587 }, 00:04:33.587 { 00:04:33.587 "subsystem": "sock", 00:04:33.587 "config": [ 00:04:33.587 { 00:04:33.587 "method": "sock_set_default_impl", 00:04:33.587 "params": { 00:04:33.587 "impl_name": "posix" 00:04:33.587 } 00:04:33.587 }, 00:04:33.587 { 00:04:33.587 "method": "sock_impl_set_options", 00:04:33.587 "params": { 00:04:33.587 "impl_name": "ssl", 00:04:33.587 "recv_buf_size": 4096, 00:04:33.587 "send_buf_size": 4096, 00:04:33.587 "enable_recv_pipe": true, 00:04:33.587 "enable_quickack": false, 00:04:33.587 
"enable_placement_id": 0, 00:04:33.587 "enable_zerocopy_send_server": true, 00:04:33.587 "enable_zerocopy_send_client": false, 00:04:33.587 "zerocopy_threshold": 0, 00:04:33.587 "tls_version": 0, 00:04:33.587 "enable_ktls": false 00:04:33.587 } 00:04:33.587 }, 00:04:33.587 { 00:04:33.587 "method": "sock_impl_set_options", 00:04:33.587 "params": { 00:04:33.587 "impl_name": "posix", 00:04:33.587 "recv_buf_size": 2097152, 00:04:33.587 "send_buf_size": 2097152, 00:04:33.587 "enable_recv_pipe": true, 00:04:33.587 "enable_quickack": false, 00:04:33.587 "enable_placement_id": 0, 00:04:33.587 "enable_zerocopy_send_server": true, 00:04:33.587 "enable_zerocopy_send_client": false, 00:04:33.587 "zerocopy_threshold": 0, 00:04:33.587 "tls_version": 0, 00:04:33.587 "enable_ktls": false 00:04:33.587 } 00:04:33.587 } 00:04:33.587 ] 00:04:33.587 }, 00:04:33.587 { 00:04:33.587 "subsystem": "vmd", 00:04:33.587 "config": [] 00:04:33.587 }, 00:04:33.587 { 00:04:33.587 "subsystem": "accel", 00:04:33.587 "config": [ 00:04:33.587 { 00:04:33.587 "method": "accel_set_options", 00:04:33.587 "params": { 00:04:33.587 "small_cache_size": 128, 00:04:33.587 "large_cache_size": 16, 00:04:33.587 "task_count": 2048, 00:04:33.587 "sequence_count": 2048, 00:04:33.587 "buf_count": 2048 00:04:33.587 } 00:04:33.587 } 00:04:33.587 ] 00:04:33.587 }, 00:04:33.587 { 00:04:33.587 "subsystem": "bdev", 00:04:33.587 "config": [ 00:04:33.587 { 00:04:33.587 "method": "bdev_set_options", 00:04:33.588 "params": { 00:04:33.588 "bdev_io_pool_size": 65535, 00:04:33.588 "bdev_io_cache_size": 256, 00:04:33.588 "bdev_auto_examine": true, 00:04:33.588 "iobuf_small_cache_size": 128, 00:04:33.588 "iobuf_large_cache_size": 16 00:04:33.588 } 00:04:33.588 }, 00:04:33.588 { 00:04:33.588 "method": "bdev_raid_set_options", 00:04:33.588 "params": { 00:04:33.588 "process_window_size_kb": 1024, 00:04:33.588 "process_max_bandwidth_mb_sec": 0 00:04:33.588 } 00:04:33.588 }, 00:04:33.588 { 00:04:33.588 "method": "bdev_iscsi_set_options", 
00:04:33.588 "params": { 00:04:33.588 "timeout_sec": 30 00:04:33.588 } 00:04:33.588 }, 00:04:33.588 { 00:04:33.588 "method": "bdev_nvme_set_options", 00:04:33.588 "params": { 00:04:33.588 "action_on_timeout": "none", 00:04:33.588 "timeout_us": 0, 00:04:33.588 "timeout_admin_us": 0, 00:04:33.588 "keep_alive_timeout_ms": 10000, 00:04:33.588 "arbitration_burst": 0, 00:04:33.588 "low_priority_weight": 0, 00:04:33.588 "medium_priority_weight": 0, 00:04:33.588 "high_priority_weight": 0, 00:04:33.588 "nvme_adminq_poll_period_us": 10000, 00:04:33.588 "nvme_ioq_poll_period_us": 0, 00:04:33.588 "io_queue_requests": 0, 00:04:33.588 "delay_cmd_submit": true, 00:04:33.588 "transport_retry_count": 4, 00:04:33.588 "bdev_retry_count": 3, 00:04:33.588 "transport_ack_timeout": 0, 00:04:33.588 "ctrlr_loss_timeout_sec": 0, 00:04:33.588 "reconnect_delay_sec": 0, 00:04:33.588 "fast_io_fail_timeout_sec": 0, 00:04:33.588 "disable_auto_failback": false, 00:04:33.588 "generate_uuids": false, 00:04:33.588 "transport_tos": 0, 00:04:33.588 "nvme_error_stat": false, 00:04:33.588 "rdma_srq_size": 0, 00:04:33.588 "io_path_stat": false, 00:04:33.588 "allow_accel_sequence": false, 00:04:33.588 "rdma_max_cq_size": 0, 00:04:33.588 "rdma_cm_event_timeout_ms": 0, 00:04:33.588 "dhchap_digests": [ 00:04:33.588 "sha256", 00:04:33.588 "sha384", 00:04:33.588 "sha512" 00:04:33.588 ], 00:04:33.588 "dhchap_dhgroups": [ 00:04:33.588 "null", 00:04:33.588 "ffdhe2048", 00:04:33.588 "ffdhe3072", 00:04:33.588 "ffdhe4096", 00:04:33.588 "ffdhe6144", 00:04:33.588 "ffdhe8192" 00:04:33.588 ] 00:04:33.588 } 00:04:33.588 }, 00:04:33.588 { 00:04:33.588 "method": "bdev_nvme_set_hotplug", 00:04:33.588 "params": { 00:04:33.588 "period_us": 100000, 00:04:33.588 "enable": false 00:04:33.588 } 00:04:33.588 }, 00:04:33.588 { 00:04:33.588 "method": "bdev_wait_for_examine" 00:04:33.588 } 00:04:33.588 ] 00:04:33.588 }, 00:04:33.588 { 00:04:33.588 "subsystem": "scsi", 00:04:33.588 "config": null 00:04:33.588 }, 00:04:33.588 { 
00:04:33.588 "subsystem": "scheduler", 00:04:33.588 "config": [ 00:04:33.588 { 00:04:33.588 "method": "framework_set_scheduler", 00:04:33.588 "params": { 00:04:33.588 "name": "static" 00:04:33.588 } 00:04:33.588 } 00:04:33.588 ] 00:04:33.588 }, 00:04:33.588 { 00:04:33.588 "subsystem": "vhost_scsi", 00:04:33.588 "config": [] 00:04:33.588 }, 00:04:33.588 { 00:04:33.588 "subsystem": "vhost_blk", 00:04:33.588 "config": [] 00:04:33.588 }, 00:04:33.588 { 00:04:33.588 "subsystem": "ublk", 00:04:33.588 "config": [] 00:04:33.588 }, 00:04:33.588 { 00:04:33.588 "subsystem": "nbd", 00:04:33.588 "config": [] 00:04:33.588 }, 00:04:33.588 { 00:04:33.588 "subsystem": "nvmf", 00:04:33.588 "config": [ 00:04:33.588 { 00:04:33.588 "method": "nvmf_set_config", 00:04:33.588 "params": { 00:04:33.588 "discovery_filter": "match_any", 00:04:33.588 "admin_cmd_passthru": { 00:04:33.588 "identify_ctrlr": false 00:04:33.588 }, 00:04:33.588 "dhchap_digests": [ 00:04:33.588 "sha256", 00:04:33.588 "sha384", 00:04:33.588 "sha512" 00:04:33.588 ], 00:04:33.588 "dhchap_dhgroups": [ 00:04:33.588 "null", 00:04:33.588 "ffdhe2048", 00:04:33.588 "ffdhe3072", 00:04:33.588 "ffdhe4096", 00:04:33.588 "ffdhe6144", 00:04:33.588 "ffdhe8192" 00:04:33.588 ] 00:04:33.588 } 00:04:33.588 }, 00:04:33.588 { 00:04:33.588 "method": "nvmf_set_max_subsystems", 00:04:33.588 "params": { 00:04:33.588 "max_subsystems": 1024 00:04:33.588 } 00:04:33.588 }, 00:04:33.588 { 00:04:33.588 "method": "nvmf_set_crdt", 00:04:33.588 "params": { 00:04:33.588 "crdt1": 0, 00:04:33.588 "crdt2": 0, 00:04:33.588 "crdt3": 0 00:04:33.588 } 00:04:33.588 }, 00:04:33.588 { 00:04:33.588 "method": "nvmf_create_transport", 00:04:33.588 "params": { 00:04:33.588 "trtype": "TCP", 00:04:33.588 "max_queue_depth": 128, 00:04:33.588 "max_io_qpairs_per_ctrlr": 127, 00:04:33.588 "in_capsule_data_size": 4096, 00:04:33.588 "max_io_size": 131072, 00:04:33.588 "io_unit_size": 131072, 00:04:33.588 "max_aq_depth": 128, 00:04:33.588 "num_shared_buffers": 511, 
00:04:33.588 "buf_cache_size": 4294967295, 00:04:33.588 "dif_insert_or_strip": false, 00:04:33.588 "zcopy": false, 00:04:33.588 "c2h_success": true, 00:04:33.588 "sock_priority": 0, 00:04:33.588 "abort_timeout_sec": 1, 00:04:33.588 "ack_timeout": 0, 00:04:33.588 "data_wr_pool_size": 0 00:04:33.588 } 00:04:33.588 } 00:04:33.588 ] 00:04:33.588 }, 00:04:33.588 { 00:04:33.588 "subsystem": "iscsi", 00:04:33.588 "config": [ 00:04:33.588 { 00:04:33.588 "method": "iscsi_set_options", 00:04:33.588 "params": { 00:04:33.588 "node_base": "iqn.2016-06.io.spdk", 00:04:33.588 "max_sessions": 128, 00:04:33.588 "max_connections_per_session": 2, 00:04:33.588 "max_queue_depth": 64, 00:04:33.588 "default_time2wait": 2, 00:04:33.588 "default_time2retain": 20, 00:04:33.588 "first_burst_length": 8192, 00:04:33.588 "immediate_data": true, 00:04:33.588 "allow_duplicated_isid": false, 00:04:33.588 "error_recovery_level": 0, 00:04:33.588 "nop_timeout": 60, 00:04:33.588 "nop_in_interval": 30, 00:04:33.588 "disable_chap": false, 00:04:33.588 "require_chap": false, 00:04:33.588 "mutual_chap": false, 00:04:33.588 "chap_group": 0, 00:04:33.588 "max_large_datain_per_connection": 64, 00:04:33.588 "max_r2t_per_connection": 4, 00:04:33.588 "pdu_pool_size": 36864, 00:04:33.588 "immediate_data_pool_size": 16384, 00:04:33.588 "data_out_pool_size": 2048 00:04:33.588 } 00:04:33.588 } 00:04:33.588 ] 00:04:33.588 } 00:04:33.588 ] 00:04:33.588 } 00:04:33.588 15:12:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:33.588 15:12:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57102 00:04:33.588 15:12:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57102 ']' 00:04:33.588 15:12:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57102 00:04:33.588 15:12:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:33.588 15:12:19 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:33.588 15:12:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57102 00:04:33.588 15:12:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:33.588 15:12:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:33.588 15:12:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57102' 00:04:33.589 killing process with pid 57102 00:04:33.589 15:12:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57102 00:04:33.589 15:12:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57102 00:04:36.120 15:12:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57158 00:04:36.120 15:12:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:36.120 15:12:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:41.509 15:12:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57158 00:04:41.509 15:12:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57158 ']' 00:04:41.509 15:12:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57158 00:04:41.509 15:12:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:41.509 15:12:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.509 15:12:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57158 00:04:41.509 15:12:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.509 killing process with pid 57158 00:04:41.509 15:12:27 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.509 15:12:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57158' 00:04:41.509 15:12:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57158 00:04:41.509 15:12:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57158 00:04:43.412 15:12:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:43.412 15:12:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:43.412 00:04:43.412 real 0m11.419s 00:04:43.412 user 0m10.869s 00:04:43.412 sys 0m0.869s 00:04:43.412 15:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.412 15:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.412 ************************************ 00:04:43.412 END TEST skip_rpc_with_json 00:04:43.412 ************************************ 00:04:43.412 15:12:29 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:43.412 15:12:29 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.412 15:12:29 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.412 15:12:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.670 ************************************ 00:04:43.670 START TEST skip_rpc_with_delay 00:04:43.670 ************************************ 00:04:43.670 15:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:43.670 15:12:29 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:43.670 15:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:43.670 
15:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:43.670 15:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:43.670 15:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.670 15:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:43.670 15:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.670 15:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:43.670 15:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.670 15:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:43.670 15:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:43.670 15:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:43.670 [2024-11-20 15:12:30.019707] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
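The `NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server ... --wait-for-rpc` invocation traced above is a negative test: the target is expected to refuse that flag combination and exit non-zero, which the `*ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.` line and the subsequent `es=1` bookkeeping confirm. A minimal standalone sketch of that helper pattern (the name `NOT` mirrors the call sites in `common/autotest_common.sh`, but this body is an illustrative reconstruction, not the SPDK original):

```shell
# Sketch of the negative-test helper: run a command that is *expected*
# to fail, and succeed only if it did fail.
NOT() {
    "$@"
    es=$?
    # Invert the exit status: success here means the wrapped command failed.
    if [ "$es" -eq 0 ]; then
        return 1
    fi
    return 0
}

# 'false' always exits non-zero, so NOT false succeeds:
if NOT false; then
    echo "negative test passed"
fi
```

Wrapping the expected-failure command this way lets the surrounding script keep `set -e` semantics while still asserting that a misconfiguration is rejected.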
00:04:43.670 15:12:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:43.670 15:12:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:43.670 15:12:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:43.670 15:12:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:43.670 00:04:43.670 real 0m0.184s 00:04:43.670 user 0m0.093s 00:04:43.670 sys 0m0.089s 00:04:43.671 15:12:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.671 15:12:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:43.671 ************************************ 00:04:43.671 END TEST skip_rpc_with_delay 00:04:43.671 ************************************ 00:04:43.671 15:12:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:43.671 15:12:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:43.671 15:12:30 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:43.671 15:12:30 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.671 15:12:30 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.671 15:12:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.928 ************************************ 00:04:43.928 START TEST exit_on_failed_rpc_init 00:04:43.928 ************************************ 00:04:43.928 15:12:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:43.928 15:12:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57286 00:04:43.928 15:12:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:43.928 15:12:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57286 00:04:43.928 15:12:30 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57286 ']' 00:04:43.928 15:12:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.928 15:12:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.928 15:12:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.928 15:12:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.928 15:12:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:43.928 [2024-11-20 15:12:30.267794] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:04:43.928 [2024-11-20 15:12:30.267916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57286 ] 00:04:44.187 [2024-11-20 15:12:30.441684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.187 [2024-11-20 15:12:30.562032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.125 15:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.125 15:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:45.125 15:12:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:45.125 15:12:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:45.125 15:12:31 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:45.125 15:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:45.125 15:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.125 15:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:45.125 15:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.125 15:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:45.125 15:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.125 15:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:45.125 15:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.125 15:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:45.125 15:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:45.125 [2024-11-20 15:12:31.526918] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:04:45.125 [2024-11-20 15:12:31.527034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57304 ] 00:04:45.384 [2024-11-20 15:12:31.697301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.384 [2024-11-20 15:12:31.816785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.384 [2024-11-20 15:12:31.816880] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:45.384 [2024-11-20 15:12:31.816897] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:45.384 [2024-11-20 15:12:31.816911] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:45.643 15:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:45.643 15:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:45.643 15:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:45.643 15:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:45.643 15:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:45.644 15:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:45.644 15:12:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:45.644 15:12:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57286 00:04:45.644 15:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57286 ']' 00:04:45.644 15:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57286 00:04:45.644 15:12:32 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:45.644 15:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.644 15:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57286 00:04:45.644 15:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:45.644 15:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:45.644 15:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57286' 00:04:45.644 killing process with pid 57286 00:04:45.644 15:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57286 00:04:45.644 15:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57286 00:04:48.179 00:04:48.179 real 0m4.386s 00:04:48.179 user 0m4.724s 00:04:48.179 sys 0m0.622s 00:04:48.179 15:12:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.179 15:12:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:48.179 ************************************ 00:04:48.179 END TEST exit_on_failed_rpc_init 00:04:48.179 ************************************ 00:04:48.179 15:12:34 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:48.179 00:04:48.179 real 0m23.982s 00:04:48.179 user 0m22.913s 00:04:48.179 sys 0m2.263s 00:04:48.179 15:12:34 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.179 15:12:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.179 ************************************ 00:04:48.179 END TEST skip_rpc 00:04:48.179 ************************************ 00:04:48.437 15:12:34 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:48.437 15:12:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.437 15:12:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.437 15:12:34 -- common/autotest_common.sh@10 -- # set +x 00:04:48.437 ************************************ 00:04:48.437 START TEST rpc_client 00:04:48.437 ************************************ 00:04:48.437 15:12:34 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:48.437 * Looking for test storage... 00:04:48.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:48.437 15:12:34 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:48.438 15:12:34 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:48.438 15:12:34 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:48.438 15:12:34 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:48.438 15:12:34 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.438 15:12:34 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.438 15:12:34 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.438 15:12:34 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.438 15:12:34 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.438 15:12:34 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.438 15:12:34 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.438 15:12:34 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.438 15:12:34 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.438 15:12:34 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.438 15:12:34 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.438 15:12:34 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:48.438 15:12:34 rpc_client -- scripts/common.sh@345 
-- # : 1 00:04:48.438 15:12:34 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.438 15:12:34 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:48.438 15:12:34 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:48.438 15:12:34 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:48.438 15:12:34 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.438 15:12:34 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:48.438 15:12:34 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.438 15:12:34 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:48.438 15:12:34 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:48.438 15:12:34 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.438 15:12:34 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:48.438 15:12:34 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.438 15:12:34 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.438 15:12:34 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.438 15:12:34 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:48.438 15:12:34 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.438 15:12:34 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:48.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.438 --rc genhtml_branch_coverage=1 00:04:48.438 --rc genhtml_function_coverage=1 00:04:48.438 --rc genhtml_legend=1 00:04:48.438 --rc geninfo_all_blocks=1 00:04:48.438 --rc geninfo_unexecuted_blocks=1 00:04:48.438 00:04:48.438 ' 00:04:48.438 15:12:34 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:48.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.438 --rc genhtml_branch_coverage=1 00:04:48.438 --rc genhtml_function_coverage=1 00:04:48.438 --rc 
genhtml_legend=1 00:04:48.438 --rc geninfo_all_blocks=1 00:04:48.438 --rc geninfo_unexecuted_blocks=1 00:04:48.438 00:04:48.438 ' 00:04:48.438 15:12:34 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:48.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.438 --rc genhtml_branch_coverage=1 00:04:48.438 --rc genhtml_function_coverage=1 00:04:48.438 --rc genhtml_legend=1 00:04:48.438 --rc geninfo_all_blocks=1 00:04:48.438 --rc geninfo_unexecuted_blocks=1 00:04:48.438 00:04:48.438 ' 00:04:48.438 15:12:34 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:48.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.438 --rc genhtml_branch_coverage=1 00:04:48.438 --rc genhtml_function_coverage=1 00:04:48.438 --rc genhtml_legend=1 00:04:48.438 --rc geninfo_all_blocks=1 00:04:48.438 --rc geninfo_unexecuted_blocks=1 00:04:48.438 00:04:48.438 ' 00:04:48.438 15:12:34 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:48.696 OK 00:04:48.696 15:12:34 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:48.696 00:04:48.696 real 0m0.305s 00:04:48.696 user 0m0.169s 00:04:48.696 sys 0m0.156s 00:04:48.696 ************************************ 00:04:48.696 END TEST rpc_client 00:04:48.696 ************************************ 00:04:48.696 15:12:34 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.696 15:12:34 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:48.696 15:12:35 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:48.696 15:12:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.696 15:12:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.696 15:12:35 -- common/autotest_common.sh@10 -- # set +x 00:04:48.696 ************************************ 00:04:48.696 START TEST json_config 
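Each section of this log is produced by the harness's `run_test` wrapper, which prints the starred `START TEST` / `END TEST` banners around a named test function and reports its `real`/`user`/`sys` timings. A minimal standalone sketch of the banner part of that pattern (the banner text matches the log; the function body is an illustrative assumption, not the `common/autotest_common.sh` original, and omits the timing output):

```shell
# Sketch of the run_test banner wrapper seen throughout this log:
# announce the named test, run it, and close with a matching banner,
# preserving the wrapped command's exit status.
run_test() {
    test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    "$@"
    rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}

# Hypothetical test function used only for this illustration:
my_check() { echo "checking"; }
run_test my_check my_check
```

Saving `$?` into `rc` before the closing banners matters: the `echo` calls would otherwise overwrite the test's exit status before `return` can propagate it.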
00:04:48.696 ************************************ 00:04:48.696 15:12:35 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:48.696 15:12:35 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:48.696 15:12:35 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:48.696 15:12:35 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:48.960 15:12:35 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:48.960 15:12:35 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.960 15:12:35 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.960 15:12:35 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.960 15:12:35 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.960 15:12:35 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.960 15:12:35 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.960 15:12:35 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.960 15:12:35 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.960 15:12:35 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.960 15:12:35 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.960 15:12:35 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.960 15:12:35 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:48.960 15:12:35 json_config -- scripts/common.sh@345 -- # : 1 00:04:48.960 15:12:35 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.960 15:12:35 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:48.960 15:12:35 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:48.960 15:12:35 json_config -- scripts/common.sh@353 -- # local d=1 00:04:48.960 15:12:35 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.960 15:12:35 json_config -- scripts/common.sh@355 -- # echo 1 00:04:48.960 15:12:35 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.960 15:12:35 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:48.960 15:12:35 json_config -- scripts/common.sh@353 -- # local d=2 00:04:48.960 15:12:35 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.960 15:12:35 json_config -- scripts/common.sh@355 -- # echo 2 00:04:48.960 15:12:35 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.960 15:12:35 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.960 15:12:35 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.960 15:12:35 json_config -- scripts/common.sh@368 -- # return 0 00:04:48.960 15:12:35 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.960 15:12:35 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:48.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.960 --rc genhtml_branch_coverage=1 00:04:48.960 --rc genhtml_function_coverage=1 00:04:48.960 --rc genhtml_legend=1 00:04:48.960 --rc geninfo_all_blocks=1 00:04:48.960 --rc geninfo_unexecuted_blocks=1 00:04:48.960 00:04:48.960 ' 00:04:48.960 15:12:35 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:48.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.960 --rc genhtml_branch_coverage=1 00:04:48.960 --rc genhtml_function_coverage=1 00:04:48.960 --rc genhtml_legend=1 00:04:48.960 --rc geninfo_all_blocks=1 00:04:48.960 --rc geninfo_unexecuted_blocks=1 00:04:48.960 00:04:48.960 ' 00:04:48.960 15:12:35 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:48.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.960 --rc genhtml_branch_coverage=1 00:04:48.960 --rc genhtml_function_coverage=1 00:04:48.960 --rc genhtml_legend=1 00:04:48.960 --rc geninfo_all_blocks=1 00:04:48.960 --rc geninfo_unexecuted_blocks=1 00:04:48.960 00:04:48.960 ' 00:04:48.960 15:12:35 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:48.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.960 --rc genhtml_branch_coverage=1 00:04:48.960 --rc genhtml_function_coverage=1 00:04:48.960 --rc genhtml_legend=1 00:04:48.960 --rc geninfo_all_blocks=1 00:04:48.960 --rc geninfo_unexecuted_blocks=1 00:04:48.960 00:04:48.960 ' 00:04:48.960 15:12:35 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:48.960 15:12:35 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:48.960 15:12:35 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:48.960 15:12:35 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:48.960 15:12:35 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:48.960 15:12:35 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:48.960 15:12:35 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:48.960 15:12:35 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:48.960 15:12:35 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:48.960 15:12:35 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:48.960 15:12:35 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:48.960 15:12:35 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:48.960 15:12:35 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f2c1538a-d621-4ee3-bb31-0925b497de45 00:04:48.960 15:12:35 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=f2c1538a-d621-4ee3-bb31-0925b497de45 00:04:48.960 15:12:35 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:48.960 15:12:35 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:48.960 15:12:35 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:48.960 15:12:35 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:48.960 15:12:35 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:48.960 15:12:35 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:48.960 15:12:35 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:48.960 15:12:35 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:48.960 15:12:35 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:48.960 15:12:35 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.960 15:12:35 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.960 15:12:35 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.960 15:12:35 json_config -- paths/export.sh@5 -- # export PATH 00:04:48.960 15:12:35 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.960 15:12:35 json_config -- nvmf/common.sh@51 -- # : 0 00:04:48.960 15:12:35 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:48.960 15:12:35 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:48.960 15:12:35 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:48.960 15:12:35 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:48.961 15:12:35 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:48.961 15:12:35 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:48.961 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:48.961 15:12:35 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:48.961 15:12:35 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:48.961 15:12:35 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:48.961 15:12:35 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:04:48.961 15:12:35 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:48.961 15:12:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:48.961 15:12:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:48.961 15:12:35 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:48.961 WARNING: No tests are enabled so not running JSON configuration tests 00:04:48.961 15:12:35 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:48.961 15:12:35 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:48.961 00:04:48.961 real 0m0.236s 00:04:48.961 user 0m0.119s 00:04:48.961 sys 0m0.113s 00:04:48.961 15:12:35 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.961 15:12:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.961 ************************************ 00:04:48.961 END TEST json_config 00:04:48.961 ************************************ 00:04:48.961 15:12:35 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:48.961 15:12:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.961 15:12:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.961 15:12:35 -- common/autotest_common.sh@10 -- # set +x 00:04:48.961 ************************************ 00:04:48.961 START TEST json_config_extra_key 00:04:48.961 ************************************ 00:04:48.961 15:12:35 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:49.220 15:12:35 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:49.220 15:12:35 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:04:49.220 15:12:35 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:49.220 15:12:35 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:49.220 15:12:35 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.220 15:12:35 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.220 15:12:35 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.220 15:12:35 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.220 15:12:35 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.220 15:12:35 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.220 15:12:35 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.220 15:12:35 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.220 15:12:35 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.220 15:12:35 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.220 15:12:35 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.220 15:12:35 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:49.220 15:12:35 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:49.220 15:12:35 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.220 15:12:35 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.220 15:12:35 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:49.220 15:12:35 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:49.220 15:12:35 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.220 15:12:35 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:49.220 15:12:35 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.220 15:12:35 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:49.220 15:12:35 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:49.220 15:12:35 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.220 15:12:35 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:49.220 15:12:35 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.220 15:12:35 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.220 15:12:35 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.220 15:12:35 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:49.220 15:12:35 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.220 15:12:35 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:49.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.220 --rc genhtml_branch_coverage=1 00:04:49.220 --rc genhtml_function_coverage=1 00:04:49.220 --rc genhtml_legend=1 00:04:49.220 --rc geninfo_all_blocks=1 00:04:49.220 --rc geninfo_unexecuted_blocks=1 00:04:49.220 00:04:49.220 ' 00:04:49.220 15:12:35 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:49.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.220 --rc genhtml_branch_coverage=1 00:04:49.220 --rc genhtml_function_coverage=1 00:04:49.220 --rc 
genhtml_legend=1 00:04:49.220 --rc geninfo_all_blocks=1 00:04:49.220 --rc geninfo_unexecuted_blocks=1 00:04:49.220 00:04:49.220 ' 00:04:49.220 15:12:35 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:49.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.220 --rc genhtml_branch_coverage=1 00:04:49.220 --rc genhtml_function_coverage=1 00:04:49.220 --rc genhtml_legend=1 00:04:49.220 --rc geninfo_all_blocks=1 00:04:49.220 --rc geninfo_unexecuted_blocks=1 00:04:49.220 00:04:49.220 ' 00:04:49.220 15:12:35 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:49.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.220 --rc genhtml_branch_coverage=1 00:04:49.220 --rc genhtml_function_coverage=1 00:04:49.220 --rc genhtml_legend=1 00:04:49.220 --rc geninfo_all_blocks=1 00:04:49.220 --rc geninfo_unexecuted_blocks=1 00:04:49.220 00:04:49.220 ' 00:04:49.220 15:12:35 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:49.220 15:12:35 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:49.220 15:12:35 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:49.220 15:12:35 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:49.220 15:12:35 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:49.220 15:12:35 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:49.220 15:12:35 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:49.220 15:12:35 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:49.220 15:12:35 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:49.220 15:12:35 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:49.220 15:12:35 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:49.220 15:12:35 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:49.220 15:12:35 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f2c1538a-d621-4ee3-bb31-0925b497de45 00:04:49.220 15:12:35 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=f2c1538a-d621-4ee3-bb31-0925b497de45 00:04:49.220 15:12:35 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:49.220 15:12:35 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:49.220 15:12:35 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:49.220 15:12:35 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:49.220 15:12:35 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:49.220 15:12:35 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:49.220 15:12:35 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:49.220 15:12:35 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:49.220 15:12:35 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:49.220 15:12:35 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.220 15:12:35 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.220 15:12:35 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.220 15:12:35 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:49.220 15:12:35 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.220 15:12:35 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:49.220 15:12:35 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:49.220 15:12:35 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:49.220 15:12:35 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:49.220 15:12:35 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:49.220 15:12:35 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:49.220 15:12:35 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:49.220 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:49.220 15:12:35 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:49.220 15:12:35 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:49.220 15:12:35 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:49.220 15:12:35 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:49.220 15:12:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:49.221 15:12:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:49.221 15:12:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:49.221 15:12:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:49.221 15:12:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:49.221 15:12:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:49.221 15:12:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:49.221 15:12:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:49.221 15:12:35 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:49.221 INFO: launching applications... 00:04:49.221 15:12:35 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:04:49.221 15:12:35 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:49.221 15:12:35 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:49.221 15:12:35 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:49.221 15:12:35 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:49.221 15:12:35 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:49.221 15:12:35 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:49.221 15:12:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:49.221 15:12:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:49.221 15:12:35 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57514 00:04:49.221 Waiting for target to run... 00:04:49.221 15:12:35 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:49.221 15:12:35 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57514 /var/tmp/spdk_tgt.sock 00:04:49.221 15:12:35 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57514 ']' 00:04:49.221 15:12:35 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:49.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:49.221 15:12:35 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.221 15:12:35 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:04:49.221 15:12:35 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.221 15:12:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:49.221 15:12:35 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:49.221 [2024-11-20 15:12:35.687735] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:04:49.221 [2024-11-20 15:12:35.687854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57514 ] 00:04:49.788 [2024-11-20 15:12:36.094075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.788 [2024-11-20 15:12:36.203602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.724 15:12:36 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.724 15:12:36 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:50.724 00:04:50.724 15:12:36 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:50.724 INFO: shutting down applications... 00:04:50.724 15:12:36 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:50.724 15:12:36 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:50.724 15:12:36 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:50.724 15:12:36 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:50.724 15:12:36 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57514 ]] 00:04:50.724 15:12:36 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57514 00:04:50.724 15:12:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:50.724 15:12:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.724 15:12:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57514 00:04:50.724 15:12:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:51.046 15:12:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:51.046 15:12:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.046 15:12:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57514 00:04:51.046 15:12:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:51.615 15:12:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:51.615 15:12:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.615 15:12:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57514 00:04:51.615 15:12:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:52.182 15:12:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:52.182 15:12:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.182 15:12:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57514 00:04:52.182 15:12:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:52.440 15:12:38 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:04:52.440 15:12:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.440 15:12:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57514 00:04:52.441 15:12:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:53.007 15:12:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:53.007 15:12:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.007 15:12:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57514 00:04:53.007 15:12:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:53.576 15:12:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:53.576 15:12:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.576 15:12:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57514 00:04:53.576 15:12:39 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:53.576 15:12:39 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:53.576 15:12:39 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:53.576 SPDK target shutdown done 00:04:53.576 15:12:39 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:53.576 Success 00:04:53.576 15:12:39 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:53.576 00:04:53.576 real 0m4.542s 00:04:53.576 user 0m4.020s 00:04:53.576 sys 0m0.588s 00:04:53.576 15:12:39 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.576 15:12:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:53.576 ************************************ 00:04:53.576 END TEST json_config_extra_key 00:04:53.576 ************************************ 00:04:53.576 15:12:39 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:53.576 15:12:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.576 15:12:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.576 15:12:39 -- common/autotest_common.sh@10 -- # set +x 00:04:53.576 ************************************ 00:04:53.576 START TEST alias_rpc 00:04:53.576 ************************************ 00:04:53.576 15:12:39 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:53.835 * Looking for test storage... 00:04:53.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:53.835 15:12:40 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:53.835 15:12:40 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:53.835 15:12:40 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:53.835 15:12:40 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:53.835 15:12:40 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.835 15:12:40 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.835 15:12:40 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.835 15:12:40 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.835 15:12:40 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.835 15:12:40 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.835 15:12:40 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.835 15:12:40 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.835 15:12:40 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.835 15:12:40 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.835 15:12:40 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.835 15:12:40 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:53.835 15:12:40 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:04:53.835 15:12:40 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.835 15:12:40 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:53.835 15:12:40 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:53.835 15:12:40 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:53.835 15:12:40 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.835 15:12:40 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:53.835 15:12:40 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.835 15:12:40 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:53.835 15:12:40 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:53.835 15:12:40 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.835 15:12:40 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:53.835 15:12:40 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.835 15:12:40 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.835 15:12:40 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.835 15:12:40 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:53.835 15:12:40 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.835 15:12:40 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:53.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.835 --rc genhtml_branch_coverage=1 00:04:53.835 --rc genhtml_function_coverage=1 00:04:53.835 --rc genhtml_legend=1 00:04:53.835 --rc geninfo_all_blocks=1 00:04:53.835 --rc geninfo_unexecuted_blocks=1 00:04:53.835 00:04:53.835 ' 00:04:53.835 15:12:40 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:53.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.835 --rc genhtml_branch_coverage=1 00:04:53.835 --rc genhtml_function_coverage=1 00:04:53.835 --rc 
genhtml_legend=1 00:04:53.835 --rc geninfo_all_blocks=1 00:04:53.835 --rc geninfo_unexecuted_blocks=1 00:04:53.835 00:04:53.835 ' 00:04:53.835 15:12:40 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:53.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.835 --rc genhtml_branch_coverage=1 00:04:53.835 --rc genhtml_function_coverage=1 00:04:53.835 --rc genhtml_legend=1 00:04:53.835 --rc geninfo_all_blocks=1 00:04:53.835 --rc geninfo_unexecuted_blocks=1 00:04:53.835 00:04:53.835 ' 00:04:53.835 15:12:40 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:53.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.835 --rc genhtml_branch_coverage=1 00:04:53.835 --rc genhtml_function_coverage=1 00:04:53.835 --rc genhtml_legend=1 00:04:53.835 --rc geninfo_all_blocks=1 00:04:53.835 --rc geninfo_unexecuted_blocks=1 00:04:53.835 00:04:53.835 ' 00:04:53.835 15:12:40 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:53.835 15:12:40 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57631 00:04:53.835 15:12:40 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.835 15:12:40 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57631 00:04:53.835 15:12:40 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57631 ']' 00:04:53.835 15:12:40 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.835 15:12:40 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.835 15:12:40 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:53.835 15:12:40 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.835 15:12:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.835 [2024-11-20 15:12:40.295906] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:04:53.835 [2024-11-20 15:12:40.296030] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57631 ] 00:04:54.094 [2024-11-20 15:12:40.478619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.353 [2024-11-20 15:12:40.602205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.289 15:12:41 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.289 15:12:41 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:55.290 15:12:41 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:55.549 15:12:41 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57631 00:04:55.549 15:12:41 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57631 ']' 00:04:55.549 15:12:41 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57631 00:04:55.549 15:12:41 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:55.549 15:12:41 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:55.549 15:12:41 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57631 00:04:55.549 15:12:41 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:55.549 15:12:41 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:55.549 killing process with pid 57631 00:04:55.549 15:12:41 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57631' 00:04:55.549 15:12:41 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57631 00:04:55.549 15:12:41 alias_rpc -- common/autotest_common.sh@978 -- # wait 57631 00:04:58.128 00:04:58.128 real 0m4.294s 00:04:58.128 user 0m4.305s 00:04:58.128 sys 0m0.638s 00:04:58.128 15:12:44 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.128 15:12:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.128 ************************************ 00:04:58.128 END TEST alias_rpc 00:04:58.128 ************************************ 00:04:58.128 15:12:44 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:58.128 15:12:44 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:58.128 15:12:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.128 15:12:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.128 15:12:44 -- common/autotest_common.sh@10 -- # set +x 00:04:58.128 ************************************ 00:04:58.128 START TEST spdkcli_tcp 00:04:58.128 ************************************ 00:04:58.128 15:12:44 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:58.128 * Looking for test storage... 
00:04:58.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:58.128 15:12:44 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:58.128 15:12:44 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:58.128 15:12:44 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:58.128 15:12:44 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:58.128 15:12:44 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.128 15:12:44 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.128 15:12:44 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.128 15:12:44 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.128 15:12:44 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.128 15:12:44 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.128 15:12:44 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.128 15:12:44 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.128 15:12:44 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.128 15:12:44 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.128 15:12:44 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.128 15:12:44 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:58.128 15:12:44 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:58.128 15:12:44 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.128 15:12:44 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.128 15:12:44 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:58.128 15:12:44 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:58.128 15:12:44 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.128 15:12:44 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:58.128 15:12:44 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.128 15:12:44 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:58.128 15:12:44 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:58.128 15:12:44 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.128 15:12:44 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:58.128 15:12:44 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.128 15:12:44 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.128 15:12:44 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.128 15:12:44 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:58.129 15:12:44 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.129 15:12:44 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:58.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.129 --rc genhtml_branch_coverage=1 00:04:58.129 --rc genhtml_function_coverage=1 00:04:58.129 --rc genhtml_legend=1 00:04:58.129 --rc geninfo_all_blocks=1 00:04:58.129 --rc geninfo_unexecuted_blocks=1 00:04:58.129 00:04:58.129 ' 00:04:58.129 15:12:44 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:58.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.129 --rc genhtml_branch_coverage=1 00:04:58.129 --rc genhtml_function_coverage=1 00:04:58.129 --rc genhtml_legend=1 00:04:58.129 --rc geninfo_all_blocks=1 00:04:58.129 --rc geninfo_unexecuted_blocks=1 00:04:58.129 00:04:58.129 ' 00:04:58.129 15:12:44 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:58.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.129 --rc genhtml_branch_coverage=1 00:04:58.129 --rc genhtml_function_coverage=1 00:04:58.129 --rc genhtml_legend=1 00:04:58.129 --rc geninfo_all_blocks=1 00:04:58.129 --rc geninfo_unexecuted_blocks=1 00:04:58.129 00:04:58.129 ' 00:04:58.129 15:12:44 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:58.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.129 --rc genhtml_branch_coverage=1 00:04:58.129 --rc genhtml_function_coverage=1 00:04:58.129 --rc genhtml_legend=1 00:04:58.129 --rc geninfo_all_blocks=1 00:04:58.129 --rc geninfo_unexecuted_blocks=1 00:04:58.129 00:04:58.129 ' 00:04:58.129 15:12:44 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:58.129 15:12:44 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:58.129 15:12:44 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:58.129 15:12:44 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:58.129 15:12:44 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:58.129 15:12:44 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:58.129 15:12:44 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:58.129 15:12:44 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:58.129 15:12:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:58.129 15:12:44 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57738 00:04:58.129 15:12:44 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:58.129 15:12:44 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57738 00:04:58.129 15:12:44 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57738 ']' 00:04:58.129 15:12:44 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.129 15:12:44 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.129 15:12:44 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.129 15:12:44 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.129 15:12:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:58.388 [2024-11-20 15:12:44.660126] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:04:58.388 [2024-11-20 15:12:44.660252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57738 ] 00:04:58.388 [2024-11-20 15:12:44.843332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:58.648 [2024-11-20 15:12:44.961681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.648 [2024-11-20 15:12:44.961733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.585 15:12:45 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.585 15:12:45 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:59.585 15:12:45 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57755 00:04:59.586 15:12:45 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:59.586 15:12:45 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:59.586 [ 00:04:59.586 "bdev_malloc_delete", 
00:04:59.586 "bdev_malloc_create", 00:04:59.586 "bdev_null_resize", 00:04:59.586 "bdev_null_delete", 00:04:59.586 "bdev_null_create", 00:04:59.586 "bdev_nvme_cuse_unregister", 00:04:59.586 "bdev_nvme_cuse_register", 00:04:59.586 "bdev_opal_new_user", 00:04:59.586 "bdev_opal_set_lock_state", 00:04:59.586 "bdev_opal_delete", 00:04:59.586 "bdev_opal_get_info", 00:04:59.586 "bdev_opal_create", 00:04:59.586 "bdev_nvme_opal_revert", 00:04:59.586 "bdev_nvme_opal_init", 00:04:59.586 "bdev_nvme_send_cmd", 00:04:59.586 "bdev_nvme_set_keys", 00:04:59.586 "bdev_nvme_get_path_iostat", 00:04:59.586 "bdev_nvme_get_mdns_discovery_info", 00:04:59.586 "bdev_nvme_stop_mdns_discovery", 00:04:59.586 "bdev_nvme_start_mdns_discovery", 00:04:59.586 "bdev_nvme_set_multipath_policy", 00:04:59.586 "bdev_nvme_set_preferred_path", 00:04:59.586 "bdev_nvme_get_io_paths", 00:04:59.586 "bdev_nvme_remove_error_injection", 00:04:59.586 "bdev_nvme_add_error_injection", 00:04:59.586 "bdev_nvme_get_discovery_info", 00:04:59.586 "bdev_nvme_stop_discovery", 00:04:59.586 "bdev_nvme_start_discovery", 00:04:59.586 "bdev_nvme_get_controller_health_info", 00:04:59.586 "bdev_nvme_disable_controller", 00:04:59.586 "bdev_nvme_enable_controller", 00:04:59.586 "bdev_nvme_reset_controller", 00:04:59.586 "bdev_nvme_get_transport_statistics", 00:04:59.586 "bdev_nvme_apply_firmware", 00:04:59.586 "bdev_nvme_detach_controller", 00:04:59.586 "bdev_nvme_get_controllers", 00:04:59.586 "bdev_nvme_attach_controller", 00:04:59.586 "bdev_nvme_set_hotplug", 00:04:59.586 "bdev_nvme_set_options", 00:04:59.586 "bdev_passthru_delete", 00:04:59.586 "bdev_passthru_create", 00:04:59.586 "bdev_lvol_set_parent_bdev", 00:04:59.586 "bdev_lvol_set_parent", 00:04:59.586 "bdev_lvol_check_shallow_copy", 00:04:59.586 "bdev_lvol_start_shallow_copy", 00:04:59.586 "bdev_lvol_grow_lvstore", 00:04:59.586 "bdev_lvol_get_lvols", 00:04:59.586 "bdev_lvol_get_lvstores", 00:04:59.586 "bdev_lvol_delete", 00:04:59.586 "bdev_lvol_set_read_only", 
00:04:59.586 "bdev_lvol_resize", 00:04:59.586 "bdev_lvol_decouple_parent", 00:04:59.586 "bdev_lvol_inflate", 00:04:59.586 "bdev_lvol_rename", 00:04:59.586 "bdev_lvol_clone_bdev", 00:04:59.586 "bdev_lvol_clone", 00:04:59.586 "bdev_lvol_snapshot", 00:04:59.586 "bdev_lvol_create", 00:04:59.586 "bdev_lvol_delete_lvstore", 00:04:59.586 "bdev_lvol_rename_lvstore", 00:04:59.586 "bdev_lvol_create_lvstore", 00:04:59.586 "bdev_raid_set_options", 00:04:59.586 "bdev_raid_remove_base_bdev", 00:04:59.586 "bdev_raid_add_base_bdev", 00:04:59.586 "bdev_raid_delete", 00:04:59.586 "bdev_raid_create", 00:04:59.586 "bdev_raid_get_bdevs", 00:04:59.586 "bdev_error_inject_error", 00:04:59.586 "bdev_error_delete", 00:04:59.586 "bdev_error_create", 00:04:59.586 "bdev_split_delete", 00:04:59.586 "bdev_split_create", 00:04:59.586 "bdev_delay_delete", 00:04:59.586 "bdev_delay_create", 00:04:59.586 "bdev_delay_update_latency", 00:04:59.586 "bdev_zone_block_delete", 00:04:59.586 "bdev_zone_block_create", 00:04:59.586 "blobfs_create", 00:04:59.586 "blobfs_detect", 00:04:59.586 "blobfs_set_cache_size", 00:04:59.586 "bdev_aio_delete", 00:04:59.586 "bdev_aio_rescan", 00:04:59.586 "bdev_aio_create", 00:04:59.586 "bdev_ftl_set_property", 00:04:59.586 "bdev_ftl_get_properties", 00:04:59.586 "bdev_ftl_get_stats", 00:04:59.586 "bdev_ftl_unmap", 00:04:59.586 "bdev_ftl_unload", 00:04:59.586 "bdev_ftl_delete", 00:04:59.586 "bdev_ftl_load", 00:04:59.586 "bdev_ftl_create", 00:04:59.586 "bdev_virtio_attach_controller", 00:04:59.586 "bdev_virtio_scsi_get_devices", 00:04:59.586 "bdev_virtio_detach_controller", 00:04:59.586 "bdev_virtio_blk_set_hotplug", 00:04:59.586 "bdev_iscsi_delete", 00:04:59.586 "bdev_iscsi_create", 00:04:59.586 "bdev_iscsi_set_options", 00:04:59.586 "accel_error_inject_error", 00:04:59.586 "ioat_scan_accel_module", 00:04:59.586 "dsa_scan_accel_module", 00:04:59.586 "iaa_scan_accel_module", 00:04:59.586 "keyring_file_remove_key", 00:04:59.586 "keyring_file_add_key", 00:04:59.586 
"keyring_linux_set_options", 00:04:59.586 "fsdev_aio_delete", 00:04:59.586 "fsdev_aio_create", 00:04:59.586 "iscsi_get_histogram", 00:04:59.586 "iscsi_enable_histogram", 00:04:59.586 "iscsi_set_options", 00:04:59.586 "iscsi_get_auth_groups", 00:04:59.586 "iscsi_auth_group_remove_secret", 00:04:59.586 "iscsi_auth_group_add_secret", 00:04:59.586 "iscsi_delete_auth_group", 00:04:59.586 "iscsi_create_auth_group", 00:04:59.586 "iscsi_set_discovery_auth", 00:04:59.586 "iscsi_get_options", 00:04:59.586 "iscsi_target_node_request_logout", 00:04:59.586 "iscsi_target_node_set_redirect", 00:04:59.586 "iscsi_target_node_set_auth", 00:04:59.586 "iscsi_target_node_add_lun", 00:04:59.586 "iscsi_get_stats", 00:04:59.586 "iscsi_get_connections", 00:04:59.586 "iscsi_portal_group_set_auth", 00:04:59.586 "iscsi_start_portal_group", 00:04:59.586 "iscsi_delete_portal_group", 00:04:59.586 "iscsi_create_portal_group", 00:04:59.586 "iscsi_get_portal_groups", 00:04:59.586 "iscsi_delete_target_node", 00:04:59.586 "iscsi_target_node_remove_pg_ig_maps", 00:04:59.586 "iscsi_target_node_add_pg_ig_maps", 00:04:59.586 "iscsi_create_target_node", 00:04:59.586 "iscsi_get_target_nodes", 00:04:59.586 "iscsi_delete_initiator_group", 00:04:59.586 "iscsi_initiator_group_remove_initiators", 00:04:59.586 "iscsi_initiator_group_add_initiators", 00:04:59.586 "iscsi_create_initiator_group", 00:04:59.586 "iscsi_get_initiator_groups", 00:04:59.586 "nvmf_set_crdt", 00:04:59.586 "nvmf_set_config", 00:04:59.586 "nvmf_set_max_subsystems", 00:04:59.586 "nvmf_stop_mdns_prr", 00:04:59.586 "nvmf_publish_mdns_prr", 00:04:59.586 "nvmf_subsystem_get_listeners", 00:04:59.586 "nvmf_subsystem_get_qpairs", 00:04:59.586 "nvmf_subsystem_get_controllers", 00:04:59.586 "nvmf_get_stats", 00:04:59.586 "nvmf_get_transports", 00:04:59.586 "nvmf_create_transport", 00:04:59.586 "nvmf_get_targets", 00:04:59.586 "nvmf_delete_target", 00:04:59.586 "nvmf_create_target", 00:04:59.586 "nvmf_subsystem_allow_any_host", 00:04:59.586 
"nvmf_subsystem_set_keys", 00:04:59.586 "nvmf_subsystem_remove_host", 00:04:59.586 "nvmf_subsystem_add_host", 00:04:59.586 "nvmf_ns_remove_host", 00:04:59.586 "nvmf_ns_add_host", 00:04:59.586 "nvmf_subsystem_remove_ns", 00:04:59.586 "nvmf_subsystem_set_ns_ana_group", 00:04:59.586 "nvmf_subsystem_add_ns", 00:04:59.586 "nvmf_subsystem_listener_set_ana_state", 00:04:59.586 "nvmf_discovery_get_referrals", 00:04:59.586 "nvmf_discovery_remove_referral", 00:04:59.586 "nvmf_discovery_add_referral", 00:04:59.586 "nvmf_subsystem_remove_listener", 00:04:59.587 "nvmf_subsystem_add_listener", 00:04:59.587 "nvmf_delete_subsystem", 00:04:59.587 "nvmf_create_subsystem", 00:04:59.587 "nvmf_get_subsystems", 00:04:59.587 "env_dpdk_get_mem_stats", 00:04:59.587 "nbd_get_disks", 00:04:59.587 "nbd_stop_disk", 00:04:59.587 "nbd_start_disk", 00:04:59.587 "ublk_recover_disk", 00:04:59.587 "ublk_get_disks", 00:04:59.587 "ublk_stop_disk", 00:04:59.587 "ublk_start_disk", 00:04:59.587 "ublk_destroy_target", 00:04:59.587 "ublk_create_target", 00:04:59.587 "virtio_blk_create_transport", 00:04:59.587 "virtio_blk_get_transports", 00:04:59.587 "vhost_controller_set_coalescing", 00:04:59.587 "vhost_get_controllers", 00:04:59.587 "vhost_delete_controller", 00:04:59.587 "vhost_create_blk_controller", 00:04:59.587 "vhost_scsi_controller_remove_target", 00:04:59.587 "vhost_scsi_controller_add_target", 00:04:59.587 "vhost_start_scsi_controller", 00:04:59.587 "vhost_create_scsi_controller", 00:04:59.587 "thread_set_cpumask", 00:04:59.587 "scheduler_set_options", 00:04:59.587 "framework_get_governor", 00:04:59.587 "framework_get_scheduler", 00:04:59.587 "framework_set_scheduler", 00:04:59.587 "framework_get_reactors", 00:04:59.587 "thread_get_io_channels", 00:04:59.587 "thread_get_pollers", 00:04:59.587 "thread_get_stats", 00:04:59.587 "framework_monitor_context_switch", 00:04:59.587 "spdk_kill_instance", 00:04:59.587 "log_enable_timestamps", 00:04:59.587 "log_get_flags", 00:04:59.587 "log_clear_flag", 
00:04:59.587 "log_set_flag", 00:04:59.587 "log_get_level", 00:04:59.587 "log_set_level", 00:04:59.587 "log_get_print_level", 00:04:59.587 "log_set_print_level", 00:04:59.587 "framework_enable_cpumask_locks", 00:04:59.587 "framework_disable_cpumask_locks", 00:04:59.587 "framework_wait_init", 00:04:59.587 "framework_start_init", 00:04:59.587 "scsi_get_devices", 00:04:59.587 "bdev_get_histogram", 00:04:59.587 "bdev_enable_histogram", 00:04:59.587 "bdev_set_qos_limit", 00:04:59.587 "bdev_set_qd_sampling_period", 00:04:59.587 "bdev_get_bdevs", 00:04:59.587 "bdev_reset_iostat", 00:04:59.587 "bdev_get_iostat", 00:04:59.587 "bdev_examine", 00:04:59.587 "bdev_wait_for_examine", 00:04:59.587 "bdev_set_options", 00:04:59.587 "accel_get_stats", 00:04:59.587 "accel_set_options", 00:04:59.587 "accel_set_driver", 00:04:59.587 "accel_crypto_key_destroy", 00:04:59.587 "accel_crypto_keys_get", 00:04:59.587 "accel_crypto_key_create", 00:04:59.587 "accel_assign_opc", 00:04:59.587 "accel_get_module_info", 00:04:59.587 "accel_get_opc_assignments", 00:04:59.587 "vmd_rescan", 00:04:59.587 "vmd_remove_device", 00:04:59.587 "vmd_enable", 00:04:59.587 "sock_get_default_impl", 00:04:59.587 "sock_set_default_impl", 00:04:59.587 "sock_impl_set_options", 00:04:59.587 "sock_impl_get_options", 00:04:59.587 "iobuf_get_stats", 00:04:59.587 "iobuf_set_options", 00:04:59.587 "keyring_get_keys", 00:04:59.587 "framework_get_pci_devices", 00:04:59.587 "framework_get_config", 00:04:59.587 "framework_get_subsystems", 00:04:59.587 "fsdev_set_opts", 00:04:59.587 "fsdev_get_opts", 00:04:59.587 "trace_get_info", 00:04:59.587 "trace_get_tpoint_group_mask", 00:04:59.587 "trace_disable_tpoint_group", 00:04:59.587 "trace_enable_tpoint_group", 00:04:59.587 "trace_clear_tpoint_mask", 00:04:59.587 "trace_set_tpoint_mask", 00:04:59.587 "notify_get_notifications", 00:04:59.587 "notify_get_types", 00:04:59.587 "spdk_get_version", 00:04:59.587 "rpc_get_methods" 00:04:59.587 ] 00:04:59.587 15:12:46 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:59.587 15:12:46 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:59.587 15:12:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:59.846 15:12:46 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:59.846 15:12:46 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57738 00:04:59.846 15:12:46 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57738 ']' 00:04:59.846 15:12:46 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57738 00:04:59.846 15:12:46 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:59.846 15:12:46 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.846 15:12:46 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57738 00:04:59.846 killing process with pid 57738 00:04:59.846 15:12:46 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.846 15:12:46 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.846 15:12:46 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57738' 00:04:59.846 15:12:46 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57738 00:04:59.846 15:12:46 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57738 00:05:02.440 ************************************ 00:05:02.440 END TEST spdkcli_tcp 00:05:02.440 ************************************ 00:05:02.440 00:05:02.440 real 0m4.247s 00:05:02.440 user 0m7.587s 00:05:02.440 sys 0m0.707s 00:05:02.440 15:12:48 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.440 15:12:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:02.440 15:12:48 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:02.440 15:12:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.440 15:12:48 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.440 15:12:48 -- common/autotest_common.sh@10 -- # set +x 00:05:02.440 ************************************ 00:05:02.440 START TEST dpdk_mem_utility 00:05:02.440 ************************************ 00:05:02.440 15:12:48 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:02.440 * Looking for test storage... 00:05:02.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:02.440 15:12:48 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:02.440 15:12:48 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:02.440 15:12:48 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:02.440 15:12:48 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:02.440 15:12:48 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.440 15:12:48 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.440 15:12:48 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.440 15:12:48 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.440 15:12:48 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.440 15:12:48 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.440 15:12:48 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.440 15:12:48 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.440 15:12:48 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.440 15:12:48 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.440 15:12:48 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.440 15:12:48 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:02.440 15:12:48 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:02.440 
15:12:48 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.440 15:12:48 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:02.440 15:12:48 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:02.440 15:12:48 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:02.440 15:12:48 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.440 15:12:48 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:02.440 15:12:48 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.440 15:12:48 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:02.440 15:12:48 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:02.440 15:12:48 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.440 15:12:48 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:02.440 15:12:48 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.440 15:12:48 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.440 15:12:48 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.440 15:12:48 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:02.440 15:12:48 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.440 15:12:48 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:02.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.440 --rc genhtml_branch_coverage=1 00:05:02.440 --rc genhtml_function_coverage=1 00:05:02.440 --rc genhtml_legend=1 00:05:02.440 --rc geninfo_all_blocks=1 00:05:02.440 --rc geninfo_unexecuted_blocks=1 00:05:02.440 00:05:02.440 ' 00:05:02.440 15:12:48 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:02.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.440 --rc 
genhtml_branch_coverage=1 00:05:02.440 --rc genhtml_function_coverage=1 00:05:02.440 --rc genhtml_legend=1 00:05:02.440 --rc geninfo_all_blocks=1 00:05:02.440 --rc geninfo_unexecuted_blocks=1 00:05:02.440 00:05:02.440 ' 00:05:02.440 15:12:48 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:02.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.440 --rc genhtml_branch_coverage=1 00:05:02.440 --rc genhtml_function_coverage=1 00:05:02.440 --rc genhtml_legend=1 00:05:02.440 --rc geninfo_all_blocks=1 00:05:02.440 --rc geninfo_unexecuted_blocks=1 00:05:02.440 00:05:02.440 ' 00:05:02.440 15:12:48 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:02.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.440 --rc genhtml_branch_coverage=1 00:05:02.440 --rc genhtml_function_coverage=1 00:05:02.440 --rc genhtml_legend=1 00:05:02.440 --rc geninfo_all_blocks=1 00:05:02.440 --rc geninfo_unexecuted_blocks=1 00:05:02.440 00:05:02.440 ' 00:05:02.440 15:12:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:02.440 15:12:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57860 00:05:02.440 15:12:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.440 15:12:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57860 00:05:02.440 15:12:48 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57860 ']' 00:05:02.440 15:12:48 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.440 15:12:48 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.440 15:12:48 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:02.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.440 15:12:48 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.440 15:12:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:02.700 [2024-11-20 15:12:48.986371] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:05:02.700 [2024-11-20 15:12:48.986686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57860 ] 00:05:02.700 [2024-11-20 15:12:49.168369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.958 [2024-11-20 15:12:49.286286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.898 15:12:50 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.898 15:12:50 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:03.898 15:12:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:03.898 15:12:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:03.898 15:12:50 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.898 15:12:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:03.898 { 00:05:03.898 "filename": "/tmp/spdk_mem_dump.txt" 00:05:03.898 } 00:05:03.898 15:12:50 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.898 15:12:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:03.898 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:03.898 1 heaps totaling size 824.000000 MiB 00:05:03.898 size: 
824.000000 MiB heap id: 0 00:05:03.898 end heaps---------- 00:05:03.898 9 mempools totaling size 603.782043 MiB 00:05:03.898 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:03.898 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:03.898 size: 100.555481 MiB name: bdev_io_57860 00:05:03.898 size: 50.003479 MiB name: msgpool_57860 00:05:03.898 size: 36.509338 MiB name: fsdev_io_57860 00:05:03.898 size: 21.763794 MiB name: PDU_Pool 00:05:03.898 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:03.898 size: 4.133484 MiB name: evtpool_57860 00:05:03.898 size: 0.026123 MiB name: Session_Pool 00:05:03.898 end mempools------- 00:05:03.898 6 memzones totaling size 4.142822 MiB 00:05:03.898 size: 1.000366 MiB name: RG_ring_0_57860 00:05:03.898 size: 1.000366 MiB name: RG_ring_1_57860 00:05:03.898 size: 1.000366 MiB name: RG_ring_4_57860 00:05:03.898 size: 1.000366 MiB name: RG_ring_5_57860 00:05:03.898 size: 0.125366 MiB name: RG_ring_2_57860 00:05:03.898 size: 0.015991 MiB name: RG_ring_3_57860 00:05:03.898 end memzones------- 00:05:03.898 15:12:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:03.898 heap id: 0 total size: 824.000000 MiB number of busy elements: 322 number of free elements: 18 00:05:03.898 list of free elements. 
size: 16.779663 MiB 00:05:03.898 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:03.898 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:03.898 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:03.898 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:03.898 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:03.898 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:03.898 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:03.898 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:03.898 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:03.898 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:03.898 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:03.898 element at address: 0x20001b400000 with size: 0.560974 MiB 00:05:03.898 element at address: 0x200000c00000 with size: 0.489197 MiB 00:05:03.898 element at address: 0x200019600000 with size: 0.488220 MiB 00:05:03.898 element at address: 0x200019e00000 with size: 0.485413 MiB 00:05:03.898 element at address: 0x200012c00000 with size: 0.433228 MiB 00:05:03.898 element at address: 0x200028800000 with size: 0.390442 MiB 00:05:03.898 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:03.898 list of standard malloc elements. 
size: 199.289429 MiB 00:05:03.898 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:03.898 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:03.898 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:03.898 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:03.898 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:03.898 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:03.898 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:03.898 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:03.898 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:03.898 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:05:03.898 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:03.898 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:03.898 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:03.898 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:03.898 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:03.898 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:03.898 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:03.898 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:03.898 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:03.898 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:03.898 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:03.898 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:03.898 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:03.898 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:03.898 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:03.898 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:03.898 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:03.898 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:05:03.898 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:03.898 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:05:03.898 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:03.898 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:03.898 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:03.898 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:03.898 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:03.898 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:03.898 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:03.898 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:03.898 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:03.898 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:03.898 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:03.898 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:03.898 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:03.898 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:03.898 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:03.898 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:03.898 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:03.898 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:03.898 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:03.898 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:03.898 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:03.898 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:03.898 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:03.898 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:03.898 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:05:03.898 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:03.898 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:03.898 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:03.898 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:03.898 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:03.898 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:03.898 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:03.898 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:03.898 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:03.898 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:03.898 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:05:03.898 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:05:03.898 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:05:03.898 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:05:03.899 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:03.899 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:05:03.899 
element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200019affc40 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b490ac0 with size: 0.000244 
MiB 00:05:03.899 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4926c0 
with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:05:03.899 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:05:03.900 element at 
address: 0x20001b4942c0 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:05:03.900 element at address: 0x200028863f40 with size: 0.000244 MiB 00:05:03.900 element at address: 0x200028864040 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886af80 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886b080 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886b180 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886b280 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886b380 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886b480 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886b580 with size: 0.000244 MiB 
00:05:03.900 element at address: 0x20002886b680 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886b780 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886b880 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886b980 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886be80 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886c080 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886c180 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886c280 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886c380 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886c480 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886c580 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886c680 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886c780 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886c880 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886c980 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886d080 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886d180 with 
size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886d280 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886d380 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886d480 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886d580 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886d680 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886d780 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886d880 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886d980 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886da80 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886db80 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886de80 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886df80 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886e080 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886e180 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886e280 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886e380 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886e480 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886e580 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886e680 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886e780 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886e880 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886e980 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:05:03.900 element at address: 
0x20002886ed80 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886f080 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886f180 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886f280 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886f380 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886f480 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886f580 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886f680 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886f780 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886f880 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886f980 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:05:03.900 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:05:03.900 list of memzone associated elements. 
size: 607.930908 MiB 00:05:03.900 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:03.900 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:03.900 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:03.900 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:03.900 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:03.900 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57860_0 00:05:03.900 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:03.900 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57860_0 00:05:03.900 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:03.900 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57860_0 00:05:03.900 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:03.900 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:03.900 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:03.900 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:03.900 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:03.900 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57860_0 00:05:03.900 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:03.900 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57860 00:05:03.900 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:03.900 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57860 00:05:03.900 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:03.900 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:03.900 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:03.900 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:03.900 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:03.900 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:03.900 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:03.900 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:03.900 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:03.900 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57860 00:05:03.900 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:03.900 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57860 00:05:03.900 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:03.900 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57860 00:05:03.900 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:03.900 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57860 00:05:03.900 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:03.900 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57860 00:05:03.900 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:03.900 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57860 00:05:03.900 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:05:03.900 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:03.900 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:05:03.900 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:03.900 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:05:03.900 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:03.900 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:03.900 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57860 00:05:03.901 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:03.901 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57860 00:05:03.901 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:05:03.901 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:03.901 element at address: 0x200028864140 with size: 0.023804 MiB 00:05:03.901 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:03.901 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:03.901 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57860 00:05:03.901 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:05:03.901 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:03.901 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:03.901 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57860 00:05:03.901 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:03.901 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57860 00:05:03.901 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:03.901 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57860 00:05:03.901 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:05:03.901 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:03.901 15:12:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:03.901 15:12:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57860 00:05:03.901 15:12:50 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57860 ']' 00:05:03.901 15:12:50 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57860 00:05:03.901 15:12:50 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:03.901 15:12:50 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.901 15:12:50 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57860 00:05:03.901 killing process with pid 57860 00:05:03.901 15:12:50 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:05:03.901 15:12:50 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.901 15:12:50 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57860' 00:05:03.901 15:12:50 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57860 00:05:03.901 15:12:50 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57860 00:05:06.438 00:05:06.438 real 0m4.084s 00:05:06.438 user 0m3.956s 00:05:06.438 sys 0m0.610s 00:05:06.438 15:12:52 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.438 ************************************ 00:05:06.438 END TEST dpdk_mem_utility 00:05:06.438 15:12:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:06.438 ************************************ 00:05:06.438 15:12:52 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:06.438 15:12:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.438 15:12:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.438 15:12:52 -- common/autotest_common.sh@10 -- # set +x 00:05:06.438 ************************************ 00:05:06.438 START TEST event 00:05:06.438 ************************************ 00:05:06.438 15:12:52 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:06.438 * Looking for test storage... 
00:05:06.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:06.697 15:12:52 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:06.697 15:12:52 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:06.697 15:12:52 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:06.697 15:12:52 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:06.697 15:12:52 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.697 15:12:52 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.697 15:12:52 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.697 15:12:52 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.697 15:12:53 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.697 15:12:53 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.697 15:12:53 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.697 15:12:53 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.697 15:12:53 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.697 15:12:53 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.697 15:12:53 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.697 15:12:53 event -- scripts/common.sh@344 -- # case "$op" in 00:05:06.697 15:12:53 event -- scripts/common.sh@345 -- # : 1 00:05:06.697 15:12:53 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.697 15:12:53 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:06.697 15:12:53 event -- scripts/common.sh@365 -- # decimal 1 00:05:06.697 15:12:53 event -- scripts/common.sh@353 -- # local d=1 00:05:06.697 15:12:53 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.697 15:12:53 event -- scripts/common.sh@355 -- # echo 1 00:05:06.697 15:12:53 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.697 15:12:53 event -- scripts/common.sh@366 -- # decimal 2 00:05:06.697 15:12:53 event -- scripts/common.sh@353 -- # local d=2 00:05:06.697 15:12:53 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.697 15:12:53 event -- scripts/common.sh@355 -- # echo 2 00:05:06.697 15:12:53 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.697 15:12:53 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.697 15:12:53 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.697 15:12:53 event -- scripts/common.sh@368 -- # return 0 00:05:06.697 15:12:53 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.697 15:12:53 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:06.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.697 --rc genhtml_branch_coverage=1 00:05:06.698 --rc genhtml_function_coverage=1 00:05:06.698 --rc genhtml_legend=1 00:05:06.698 --rc geninfo_all_blocks=1 00:05:06.698 --rc geninfo_unexecuted_blocks=1 00:05:06.698 00:05:06.698 ' 00:05:06.698 15:12:53 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:06.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.698 --rc genhtml_branch_coverage=1 00:05:06.698 --rc genhtml_function_coverage=1 00:05:06.698 --rc genhtml_legend=1 00:05:06.698 --rc geninfo_all_blocks=1 00:05:06.698 --rc geninfo_unexecuted_blocks=1 00:05:06.698 00:05:06.698 ' 00:05:06.698 15:12:53 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:06.698 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:06.698 --rc genhtml_branch_coverage=1 00:05:06.698 --rc genhtml_function_coverage=1 00:05:06.698 --rc genhtml_legend=1 00:05:06.698 --rc geninfo_all_blocks=1 00:05:06.698 --rc geninfo_unexecuted_blocks=1 00:05:06.698 00:05:06.698 ' 00:05:06.698 15:12:53 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:06.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.698 --rc genhtml_branch_coverage=1 00:05:06.698 --rc genhtml_function_coverage=1 00:05:06.698 --rc genhtml_legend=1 00:05:06.698 --rc geninfo_all_blocks=1 00:05:06.698 --rc geninfo_unexecuted_blocks=1 00:05:06.698 00:05:06.698 ' 00:05:06.698 15:12:53 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:06.698 15:12:53 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:06.698 15:12:53 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:06.698 15:12:53 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:06.698 15:12:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.698 15:12:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.698 ************************************ 00:05:06.698 START TEST event_perf 00:05:06.698 ************************************ 00:05:06.698 15:12:53 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:06.698 Running I/O for 1 seconds...[2024-11-20 15:12:53.088331] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:05:06.698 [2024-11-20 15:12:53.088547] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57968 ] 00:05:06.957 [2024-11-20 15:12:53.271762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:06.957 [2024-11-20 15:12:53.392907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.957 [2024-11-20 15:12:53.393088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:06.957 [2024-11-20 15:12:53.393205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.957 [2024-11-20 15:12:53.393241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:08.378 Running I/O for 1 seconds... 00:05:08.378 lcore 0: 204228 00:05:08.378 lcore 1: 204231 00:05:08.378 lcore 2: 204224 00:05:08.378 lcore 3: 204225 00:05:08.378 done. 
00:05:08.378 00:05:08.378 real 0m1.618s 00:05:08.378 user 0m4.344s 00:05:08.378 sys 0m0.144s 00:05:08.378 15:12:54 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.378 15:12:54 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:08.378 ************************************ 00:05:08.378 END TEST event_perf 00:05:08.378 ************************************ 00:05:08.378 15:12:54 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:08.378 15:12:54 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:08.378 15:12:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.378 15:12:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.378 ************************************ 00:05:08.378 START TEST event_reactor 00:05:08.378 ************************************ 00:05:08.378 15:12:54 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:08.378 [2024-11-20 15:12:54.776534] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:05:08.378 [2024-11-20 15:12:54.777178] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58013 ] 00:05:08.636 [2024-11-20 15:12:54.976427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.636 [2024-11-20 15:12:55.099928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.013 test_start 00:05:10.013 oneshot 00:05:10.013 tick 100 00:05:10.013 tick 100 00:05:10.013 tick 250 00:05:10.013 tick 100 00:05:10.013 tick 100 00:05:10.013 tick 250 00:05:10.013 tick 100 00:05:10.013 tick 500 00:05:10.013 tick 100 00:05:10.013 tick 100 00:05:10.013 tick 250 00:05:10.013 tick 100 00:05:10.013 tick 100 00:05:10.013 test_end 00:05:10.013 00:05:10.013 real 0m1.609s 00:05:10.013 user 0m1.375s 00:05:10.013 sys 0m0.126s 00:05:10.013 15:12:56 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.013 ************************************ 00:05:10.013 END TEST event_reactor 00:05:10.013 ************************************ 00:05:10.013 15:12:56 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:10.013 15:12:56 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:10.013 15:12:56 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:10.013 15:12:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.013 15:12:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:10.013 ************************************ 00:05:10.013 START TEST event_reactor_perf 00:05:10.013 ************************************ 00:05:10.013 15:12:56 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:10.013 [2024-11-20 
15:12:56.457627] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:05:10.013 [2024-11-20 15:12:56.457751] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58050 ] 00:05:10.272 [2024-11-20 15:12:56.640028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.530 [2024-11-20 15:12:56.761214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.530 test_start 00:05:11.530 test_end 00:05:11.530 Performance: 374598 events per second 00:05:11.530 ************************************ 00:05:11.530 END TEST event_reactor_perf 00:05:11.530 ************************************ 00:05:11.530 00:05:11.530 real 0m1.582s 00:05:11.530 user 0m1.371s 00:05:11.530 sys 0m0.102s 00:05:11.530 15:12:57 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.530 15:12:57 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:11.789 15:12:58 event -- event/event.sh@49 -- # uname -s 00:05:11.789 15:12:58 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:11.789 15:12:58 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:11.789 15:12:58 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.789 15:12:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.789 15:12:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:11.789 ************************************ 00:05:11.789 START TEST event_scheduler 00:05:11.789 ************************************ 00:05:11.789 15:12:58 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:11.789 * Looking for test storage... 
00:05:11.789 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:11.789 15:12:58 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:11.789 15:12:58 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:11.789 15:12:58 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:12.049 15:12:58 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:12.049 15:12:58 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.049 15:12:58 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.049 15:12:58 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.049 15:12:58 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.049 15:12:58 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.049 15:12:58 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.049 15:12:58 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.049 15:12:58 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.049 15:12:58 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.049 15:12:58 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.049 15:12:58 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.049 15:12:58 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:12.049 15:12:58 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:12.049 15:12:58 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.049 15:12:58 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.049 15:12:58 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:12.049 15:12:58 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:12.049 15:12:58 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.049 15:12:58 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:12.049 15:12:58 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.049 15:12:58 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:12.049 15:12:58 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:12.049 15:12:58 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.049 15:12:58 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:12.049 15:12:58 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.049 15:12:58 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.049 15:12:58 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.049 15:12:58 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:12.049 15:12:58 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.049 15:12:58 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:12.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.049 --rc genhtml_branch_coverage=1 00:05:12.049 --rc genhtml_function_coverage=1 00:05:12.049 --rc genhtml_legend=1 00:05:12.049 --rc geninfo_all_blocks=1 00:05:12.049 --rc geninfo_unexecuted_blocks=1 00:05:12.049 00:05:12.049 ' 00:05:12.049 15:12:58 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:12.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.049 --rc genhtml_branch_coverage=1 00:05:12.049 --rc genhtml_function_coverage=1 00:05:12.049 --rc 
genhtml_legend=1 00:05:12.049 --rc geninfo_all_blocks=1 00:05:12.049 --rc geninfo_unexecuted_blocks=1 00:05:12.049 00:05:12.049 ' 00:05:12.049 15:12:58 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:12.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.049 --rc genhtml_branch_coverage=1 00:05:12.049 --rc genhtml_function_coverage=1 00:05:12.049 --rc genhtml_legend=1 00:05:12.049 --rc geninfo_all_blocks=1 00:05:12.049 --rc geninfo_unexecuted_blocks=1 00:05:12.049 00:05:12.049 ' 00:05:12.049 15:12:58 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:12.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.049 --rc genhtml_branch_coverage=1 00:05:12.049 --rc genhtml_function_coverage=1 00:05:12.049 --rc genhtml_legend=1 00:05:12.049 --rc geninfo_all_blocks=1 00:05:12.049 --rc geninfo_unexecuted_blocks=1 00:05:12.049 00:05:12.049 ' 00:05:12.049 15:12:58 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:12.049 15:12:58 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58120 00:05:12.049 15:12:58 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:12.049 15:12:58 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.049 15:12:58 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58120 00:05:12.049 15:12:58 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58120 ']' 00:05:12.049 15:12:58 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.049 15:12:58 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.049 15:12:58 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:12.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.049 15:12:58 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.049 15:12:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:12.049 [2024-11-20 15:12:58.386703] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:05:12.049 [2024-11-20 15:12:58.386836] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58120 ] 00:05:12.307 [2024-11-20 15:12:58.568215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:12.307 [2024-11-20 15:12:58.691773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.307 [2024-11-20 15:12:58.691892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.307 [2024-11-20 15:12:58.691991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.307 [2024-11-20 15:12:58.692021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:12.876 15:12:59 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.876 15:12:59 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:12.876 15:12:59 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:12.876 15:12:59 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.876 15:12:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:12.876 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:12.876 POWER: Cannot set governor of lcore 0 to userspace 00:05:12.876 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:12.876 POWER: Cannot set governor of lcore 0 to performance 00:05:12.876 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:12.876 POWER: Cannot set governor of lcore 0 to userspace 00:05:12.876 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:12.876 POWER: Cannot set governor of lcore 0 to userspace 00:05:12.876 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:12.876 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:12.876 POWER: Unable to set Power Management Environment for lcore 0 00:05:12.876 [2024-11-20 15:12:59.229107] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:12.876 [2024-11-20 15:12:59.229136] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:12.876 [2024-11-20 15:12:59.229151] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:12.876 [2024-11-20 15:12:59.229178] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:12.876 [2024-11-20 15:12:59.229191] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:12.876 [2024-11-20 15:12:59.229206] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:12.876 15:12:59 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.876 15:12:59 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:12.876 15:12:59 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.876 15:12:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:13.134 [2024-11-20 15:12:59.561960] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:13.134 15:12:59 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.134 15:12:59 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:13.134 15:12:59 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.134 15:12:59 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.134 15:12:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:13.134 ************************************ 00:05:13.134 START TEST scheduler_create_thread 00:05:13.134 ************************************ 00:05:13.134 15:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:13.134 15:12:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:13.134 15:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.134 15:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.134 2 00:05:13.134 15:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.134 15:12:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:13.134 15:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.134 15:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.134 3 00:05:13.134 15:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.134 15:12:59 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:13.134 15:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.134 15:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.134 4 00:05:13.134 15:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.134 15:12:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:13.134 15:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.134 15:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.394 5 00:05:13.394 15:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.394 15:12:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:13.394 15:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.394 15:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.394 6 00:05:13.394 15:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.394 15:12:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:13.394 15:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.394 15:12:59 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:13.394 7 00:05:13.394 15:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.394 15:12:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:13.394 15:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.394 15:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.394 8 00:05:13.394 15:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.394 15:12:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:13.394 15:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.394 15:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.394 9 00:05:13.394 15:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.394 15:12:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:13.394 15:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.394 15:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.394 10 00:05:13.394 15:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.394 15:12:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:13.394 15:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.394 15:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.394 15:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.394 15:12:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:13.394 15:12:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:13.394 15:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.394 15:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.965 15:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.965 15:13:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:13.965 15:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.965 15:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.348 15:13:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.348 15:13:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:15.348 15:13:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:15.348 15:13:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.348 15:13:01 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.286 ************************************ 00:05:16.286 END TEST scheduler_create_thread 00:05:16.286 ************************************ 00:05:16.286 15:13:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.286 00:05:16.286 real 0m3.099s 00:05:16.286 user 0m0.022s 00:05:16.286 sys 0m0.008s 00:05:16.286 15:13:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.286 15:13:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.286 15:13:02 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:16.286 15:13:02 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58120 00:05:16.286 15:13:02 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58120 ']' 00:05:16.286 15:13:02 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58120 00:05:16.286 15:13:02 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:16.286 15:13:02 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.286 15:13:02 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58120 00:05:16.545 killing process with pid 58120 00:05:16.545 15:13:02 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:16.545 15:13:02 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:16.545 15:13:02 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58120' 00:05:16.545 15:13:02 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58120 00:05:16.545 15:13:02 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58120 00:05:16.804 [2024-11-20 15:13:03.058621] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:18.182 00:05:18.182 real 0m6.197s 00:05:18.182 user 0m12.384s 00:05:18.183 sys 0m0.562s 00:05:18.183 15:13:04 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.183 15:13:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:18.183 ************************************ 00:05:18.183 END TEST event_scheduler 00:05:18.183 ************************************ 00:05:18.183 15:13:04 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:18.183 15:13:04 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:18.183 15:13:04 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.183 15:13:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.183 15:13:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.183 ************************************ 00:05:18.183 START TEST app_repeat 00:05:18.183 ************************************ 00:05:18.183 15:13:04 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:18.183 15:13:04 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.183 15:13:04 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.183 15:13:04 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:18.183 15:13:04 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:18.183 15:13:04 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:18.183 15:13:04 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:18.183 15:13:04 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:18.183 15:13:04 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58237 00:05:18.183 15:13:04 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.183 15:13:04 event.app_repeat -- 
event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:18.183 Process app_repeat pid: 58237 00:05:18.183 15:13:04 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58237' 00:05:18.183 spdk_app_start Round 0 00:05:18.183 15:13:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:18.183 15:13:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:18.183 15:13:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58237 /var/tmp/spdk-nbd.sock 00:05:18.183 15:13:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58237 ']' 00:05:18.183 15:13:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:18.183 15:13:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.183 15:13:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:18.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:18.183 15:13:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.183 15:13:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:18.183 [2024-11-20 15:13:04.417384] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:05:18.183 [2024-11-20 15:13:04.417727] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58237 ] 00:05:18.183 [2024-11-20 15:13:04.599720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.442 [2024-11-20 15:13:04.724187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.442 [2024-11-20 15:13:04.724222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.010 15:13:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.010 15:13:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:19.010 15:13:05 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.269 Malloc0 00:05:19.269 15:13:05 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.528 Malloc1 00:05:19.528 15:13:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.528 15:13:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.528 15:13:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.528 15:13:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:19.528 15:13:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.528 15:13:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:19.528 15:13:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.528 15:13:05 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.528 15:13:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.528 15:13:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:19.528 15:13:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.528 15:13:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:19.528 15:13:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:19.528 15:13:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:19.528 15:13:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.528 15:13:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:19.787 /dev/nbd0 00:05:19.787 15:13:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:19.787 15:13:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:19.787 15:13:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:19.787 15:13:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:19.787 15:13:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:19.787 15:13:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:19.787 15:13:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:19.787 15:13:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:19.787 15:13:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:19.787 15:13:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:19.787 15:13:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.787 1+0 records in 00:05:19.787 1+0 
records out 00:05:19.787 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000675323 s, 6.1 MB/s 00:05:19.787 15:13:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.787 15:13:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:19.787 15:13:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.787 15:13:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:19.787 15:13:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:19.787 15:13:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.787 15:13:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.787 15:13:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:20.046 /dev/nbd1 00:05:20.046 15:13:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:20.046 15:13:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:20.046 15:13:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:20.046 15:13:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:20.046 15:13:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:20.046 15:13:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:20.046 15:13:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:20.046 15:13:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:20.046 15:13:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:20.046 15:13:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:20.046 15:13:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:20.046 1+0 records in 00:05:20.046 1+0 records out 00:05:20.046 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390647 s, 10.5 MB/s 00:05:20.046 15:13:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:20.046 15:13:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:20.046 15:13:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:20.046 15:13:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:20.046 15:13:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:20.046 15:13:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:20.046 15:13:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.046 15:13:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:20.046 15:13:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.046 15:13:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:20.305 { 00:05:20.305 "nbd_device": "/dev/nbd0", 00:05:20.305 "bdev_name": "Malloc0" 00:05:20.305 }, 00:05:20.305 { 00:05:20.305 "nbd_device": "/dev/nbd1", 00:05:20.305 "bdev_name": "Malloc1" 00:05:20.305 } 00:05:20.305 ]' 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:20.305 { 00:05:20.305 "nbd_device": "/dev/nbd0", 00:05:20.305 "bdev_name": "Malloc0" 00:05:20.305 }, 00:05:20.305 { 00:05:20.305 "nbd_device": "/dev/nbd1", 00:05:20.305 "bdev_name": "Malloc1" 00:05:20.305 } 00:05:20.305 ]' 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:20.305 /dev/nbd1' 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:20.305 /dev/nbd1' 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:20.305 256+0 records in 00:05:20.305 256+0 records out 00:05:20.305 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00618417 s, 170 MB/s 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:20.305 256+0 records in 00:05:20.305 256+0 records out 00:05:20.305 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282481 s, 37.1 MB/s 00:05:20.305 15:13:06 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:20.305 256+0 records in 00:05:20.305 256+0 records out 00:05:20.305 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0318752 s, 32.9 MB/s 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.305 15:13:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:20.564 15:13:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:20.564 15:13:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:20.564 15:13:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:20.564 15:13:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.564 15:13:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.564 15:13:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:20.564 15:13:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:20.564 15:13:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.564 15:13:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.564 15:13:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:20.823 15:13:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:20.823 15:13:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:20.823 15:13:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:20.823 15:13:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.823 15:13:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.823 15:13:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:20.823 15:13:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:20.823 15:13:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.823 15:13:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:20.823 15:13:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.823 15:13:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:21.082 15:13:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:21.082 15:13:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:21.082 15:13:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:21.082 15:13:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:21.082 15:13:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:21.082 15:13:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:21.083 15:13:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:21.083 15:13:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:21.083 15:13:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:21.083 15:13:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:21.083 15:13:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:21.083 15:13:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:21.083 15:13:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:21.651 15:13:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:23.028 [2024-11-20 15:13:09.111549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.028 [2024-11-20 15:13:09.223908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.028 [2024-11-20 15:13:09.223908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.028 
[2024-11-20 15:13:09.418003] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:23.028 [2024-11-20 15:13:09.418061] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:24.983 spdk_app_start Round 1 00:05:24.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:24.983 15:13:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:24.983 15:13:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:24.983 15:13:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58237 /var/tmp/spdk-nbd.sock 00:05:24.983 15:13:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58237 ']' 00:05:24.983 15:13:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:24.983 15:13:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.983 15:13:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:24.983 15:13:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.983 15:13:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:24.983 15:13:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.983 15:13:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:24.983 15:13:11 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.983 Malloc0 00:05:24.983 15:13:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.242 Malloc1 00:05:25.502 15:13:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:25.502 15:13:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.502 15:13:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.502 15:13:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:25.502 15:13:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.502 15:13:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:25.502 15:13:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:25.502 15:13:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.502 15:13:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.502 15:13:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:25.502 15:13:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.502 15:13:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:25.502 15:13:11 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:25.502 15:13:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:25.502 15:13:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.502 15:13:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:25.502 /dev/nbd0 00:05:25.502 15:13:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:25.502 15:13:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:25.502 15:13:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:25.502 15:13:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:25.502 15:13:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:25.502 15:13:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:25.502 15:13:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:25.502 15:13:11 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:25.502 15:13:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:25.502 15:13:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:25.502 15:13:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.502 1+0 records in 00:05:25.502 1+0 records out 00:05:25.502 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301053 s, 13.6 MB/s 00:05:25.502 15:13:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.502 15:13:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:25.502 15:13:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.761 
15:13:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:25.761 15:13:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:25.761 15:13:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.761 15:13:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.761 15:13:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:25.761 /dev/nbd1 00:05:25.761 15:13:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:25.761 15:13:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:25.761 15:13:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:25.761 15:13:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:25.761 15:13:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:25.761 15:13:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:25.761 15:13:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:25.761 15:13:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:25.761 15:13:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:25.761 15:13:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:25.761 15:13:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.761 1+0 records in 00:05:25.761 1+0 records out 00:05:25.761 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283122 s, 14.5 MB/s 00:05:25.761 15:13:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.761 15:13:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:25.761 15:13:12 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.761 15:13:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:25.761 15:13:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:25.761 15:13:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.761 15:13:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.761 15:13:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.761 15:13:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.761 15:13:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.021 15:13:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:26.021 { 00:05:26.021 "nbd_device": "/dev/nbd0", 00:05:26.021 "bdev_name": "Malloc0" 00:05:26.021 }, 00:05:26.021 { 00:05:26.021 "nbd_device": "/dev/nbd1", 00:05:26.021 "bdev_name": "Malloc1" 00:05:26.021 } 00:05:26.021 ]' 00:05:26.021 15:13:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:26.021 { 00:05:26.021 "nbd_device": "/dev/nbd0", 00:05:26.021 "bdev_name": "Malloc0" 00:05:26.021 }, 00:05:26.021 { 00:05:26.021 "nbd_device": "/dev/nbd1", 00:05:26.021 "bdev_name": "Malloc1" 00:05:26.021 } 00:05:26.021 ]' 00:05:26.021 15:13:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.021 15:13:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:26.021 /dev/nbd1' 00:05:26.021 15:13:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:26.021 /dev/nbd1' 00:05:26.021 15:13:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.021 15:13:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:26.021 15:13:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:26.282 
15:13:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:26.282 15:13:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:26.282 15:13:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:26.282 15:13:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.282 15:13:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.282 15:13:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:26.282 15:13:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.282 15:13:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:26.282 15:13:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:26.282 256+0 records in 00:05:26.282 256+0 records out 00:05:26.282 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00538558 s, 195 MB/s 00:05:26.282 15:13:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.282 15:13:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:26.282 256+0 records in 00:05:26.282 256+0 records out 00:05:26.282 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254106 s, 41.3 MB/s 00:05:26.282 15:13:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.282 15:13:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:26.282 256+0 records in 00:05:26.282 256+0 records out 00:05:26.282 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0310171 s, 33.8 MB/s 00:05:26.282 15:13:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:26.282 15:13:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.282 15:13:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.282 15:13:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:26.282 15:13:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.282 15:13:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:26.282 15:13:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:26.282 15:13:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.282 15:13:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:26.282 15:13:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.282 15:13:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:26.282 15:13:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.282 15:13:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:26.282 15:13:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.282 15:13:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.282 15:13:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:26.282 15:13:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:26.282 15:13:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.282 15:13:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:26.541 15:13:12 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:26.541 15:13:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:26.541 15:13:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:26.541 15:13:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.541 15:13:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.541 15:13:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:26.541 15:13:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.541 15:13:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.541 15:13:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.541 15:13:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:26.799 15:13:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:26.799 15:13:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:26.799 15:13:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:26.799 15:13:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.799 15:13:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.799 15:13:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:26.799 15:13:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.799 15:13:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.799 15:13:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.799 15:13:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.799 15:13:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.062 15:13:13 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:27.062 15:13:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:27.062 15:13:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.062 15:13:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:27.062 15:13:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:27.062 15:13:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.062 15:13:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:27.062 15:13:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:27.062 15:13:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:27.062 15:13:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:27.062 15:13:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:27.062 15:13:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:27.062 15:13:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:27.322 15:13:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:28.700 [2024-11-20 15:13:14.924779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.700 [2024-11-20 15:13:15.038464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.700 [2024-11-20 15:13:15.038483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.959 [2024-11-20 15:13:15.229977] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:28.959 [2024-11-20 15:13:15.230074] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:30.336 spdk_app_start Round 2 00:05:30.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:30.336 15:13:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:30.336 15:13:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:30.336 15:13:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58237 /var/tmp/spdk-nbd.sock 00:05:30.336 15:13:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58237 ']' 00:05:30.336 15:13:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:30.336 15:13:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.336 15:13:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:30.336 15:13:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.336 15:13:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:30.596 15:13:16 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.596 15:13:16 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:30.596 15:13:16 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.855 Malloc0 00:05:30.855 15:13:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:31.114 Malloc1 00:05:31.114 15:13:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:31.114 15:13:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.114 15:13:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.114 15:13:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:31.114 15:13:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.114 15:13:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:31.114 15:13:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:31.114 15:13:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.114 15:13:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.114 15:13:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:31.114 15:13:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.114 15:13:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:31.114 15:13:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:31.114 15:13:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:31.114 15:13:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.114 15:13:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:31.373 /dev/nbd0 00:05:31.373 15:13:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:31.374 15:13:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:31.374 15:13:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:31.374 15:13:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:31.374 15:13:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:31.374 15:13:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:31.374 15:13:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:31.374 15:13:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:31.374 15:13:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:05:31.374 15:13:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:31.374 15:13:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:31.374 1+0 records in 00:05:31.374 1+0 records out 00:05:31.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392179 s, 10.4 MB/s 00:05:31.374 15:13:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:31.374 15:13:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:31.374 15:13:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:31.374 15:13:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:31.374 15:13:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:31.374 15:13:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.374 15:13:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.374 15:13:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:31.633 /dev/nbd1 00:05:31.633 15:13:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:31.633 15:13:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:31.633 15:13:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:31.633 15:13:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:31.633 15:13:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:31.633 15:13:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:31.633 15:13:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:31.633 15:13:18 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:31.633 15:13:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:31.633 15:13:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:31.633 15:13:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:31.633 1+0 records in 00:05:31.633 1+0 records out 00:05:31.633 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026652 s, 15.4 MB/s 00:05:31.633 15:13:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:31.633 15:13:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:31.633 15:13:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:31.633 15:13:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:31.633 15:13:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:31.633 15:13:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.633 15:13:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.633 15:13:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.633 15:13:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.633 15:13:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.893 15:13:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:31.893 { 00:05:31.893 "nbd_device": "/dev/nbd0", 00:05:31.893 "bdev_name": "Malloc0" 00:05:31.893 }, 00:05:31.893 { 00:05:31.893 "nbd_device": "/dev/nbd1", 00:05:31.893 "bdev_name": "Malloc1" 00:05:31.893 } 00:05:31.893 ]' 00:05:31.893 15:13:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:31.893 { 
00:05:31.893 "nbd_device": "/dev/nbd0", 00:05:31.893 "bdev_name": "Malloc0" 00:05:31.893 }, 00:05:31.893 { 00:05:31.893 "nbd_device": "/dev/nbd1", 00:05:31.893 "bdev_name": "Malloc1" 00:05:31.893 } 00:05:31.893 ]' 00:05:31.893 15:13:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.893 15:13:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:31.893 /dev/nbd1' 00:05:31.893 15:13:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:31.893 /dev/nbd1' 00:05:31.893 15:13:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.893 15:13:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:31.893 15:13:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:31.893 15:13:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:31.893 15:13:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:31.893 15:13:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:31.893 15:13:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.893 15:13:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.893 15:13:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:31.893 15:13:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:31.893 15:13:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:31.893 15:13:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:31.893 256+0 records in 00:05:31.893 256+0 records out 00:05:31.893 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126406 s, 83.0 MB/s 00:05:31.893 15:13:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.893 15:13:18 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:32.152 256+0 records in 00:05:32.152 256+0 records out 00:05:32.152 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264608 s, 39.6 MB/s 00:05:32.152 15:13:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.152 15:13:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:32.152 256+0 records in 00:05:32.152 256+0 records out 00:05:32.152 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0332853 s, 31.5 MB/s 00:05:32.152 15:13:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:32.152 15:13:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.152 15:13:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.152 15:13:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:32.152 15:13:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:32.152 15:13:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:32.152 15:13:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:32.152 15:13:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.152 15:13:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:32.152 15:13:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.152 15:13:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:32.152 15:13:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
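The write/verify phase above fills a scratch file (`nbdrandtest`) from `/dev/urandom`, `dd`'s it onto each nbd device with `oflag=direct`, then byte-compares device against source with `cmp -b -n 1M`, which exits non-zero on the first differing byte. A self-contained sketch of the same round-trip, with a regular temp file standing in for `/dev/nbd0` so it runs without an nbd device attached (paths and sizes are illustrative, not the test's real ones):

```shell
tmp_file=$(mktemp)   # stands in for .../test/event/nbdrandtest
dev=$(mktemp)        # stands in for /dev/nbd0 in this sketch

# Write phase: generate a random payload, then copy it to the "device".
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
dd if="$tmp_file" of="$dev" bs=4096 count=256 status=none

# Verify phase: cmp returns non-zero at the first mismatching byte.
verify=failed
cmp -b -n 1M "$tmp_file" "$dev" && verify=ok
echo "verify=$verify"

rm -f "$tmp_file" "$dev"
```

The real helper adds `oflag=direct` on the device write so the comparison reads what actually hit the block device rather than the page cache.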
00:05:32.152 15:13:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:32.152 15:13:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.152 15:13:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.152 15:13:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:32.152 15:13:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:32.152 15:13:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.152 15:13:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:32.411 15:13:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:32.411 15:13:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:32.411 15:13:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:32.411 15:13:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.411 15:13:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.411 15:13:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:32.411 15:13:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:32.411 15:13:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.411 15:13:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.411 15:13:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:32.670 15:13:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:32.670 15:13:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:32.670 15:13:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:32.670 15:13:18 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.670 15:13:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.670 15:13:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:32.670 15:13:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:32.670 15:13:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.670 15:13:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:32.670 15:13:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.670 15:13:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.928 15:13:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:32.928 15:13:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:32.928 15:13:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.928 15:13:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:32.928 15:13:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.928 15:13:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:32.928 15:13:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:32.928 15:13:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:32.928 15:13:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:32.928 15:13:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:32.928 15:13:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:32.928 15:13:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:32.928 15:13:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:33.188 15:13:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:34.590 
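Both the `waitfornbd` loop used at start-up and the `waitfornbd_exit` loop used in the teardown above share one shape: poll `/proc/partitions` with `grep -q -w`, at most 20 attempts, and `break` as soon as the device appears (or, for the exit variant, disappears). A hedged, Linux-only sketch of that retry shape; `wait_for_nbd` and its sleep interval are illustrative stand-ins, not the helpers' real names:

```shell
# Poll /proc/partitions until $nbd_name is listed (present=yes) or is no
# longer listed (present=no); give up after 20 attempts like the traced
# helpers. Requires Linux for /proc/partitions.
wait_for_nbd() {
    local nbd_name=$1 present=${2:-yes} i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            [ "$present" = yes ] && return 0
        else
            [ "$present" = no ] && return 0
        fi
        sleep 0.1
    done
    return 1
}

# A device name that never existed is immediately "gone".
wait_for_nbd no_such_nbd no && echo "device gone"
```

The `-w` flag matters: it matches `nbd1` as a whole word, so the check is not fooled by `nbd10` or `nbd11` appearing in the partition table.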
[2024-11-20 15:13:20.788162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.590 [2024-11-20 15:13:20.902396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.590 [2024-11-20 15:13:20.902397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.849 [2024-11-20 15:13:21.099568] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:34.849 [2024-11-20 15:13:21.099635] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:36.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:36.226 15:13:22 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58237 /var/tmp/spdk-nbd.sock 00:05:36.226 15:13:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58237 ']' 00:05:36.226 15:13:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:36.226 15:13:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.226 15:13:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:36.226 15:13:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.226 15:13:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:36.486 15:13:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.486 15:13:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:36.486 15:13:22 event.app_repeat -- event/event.sh@39 -- # killprocess 58237 00:05:36.486 15:13:22 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58237 ']' 00:05:36.486 15:13:22 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58237 00:05:36.486 15:13:22 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:36.486 15:13:22 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.486 15:13:22 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58237 00:05:36.486 15:13:22 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.486 15:13:22 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.486 15:13:22 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58237' 00:05:36.486 killing process with pid 58237 00:05:36.486 15:13:22 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58237 00:05:36.486 15:13:22 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58237 00:05:37.864 spdk_app_start is called in Round 0. 00:05:37.864 Shutdown signal received, stop current app iteration 00:05:37.864 Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 reinitialization... 00:05:37.864 spdk_app_start is called in Round 1. 00:05:37.864 Shutdown signal received, stop current app iteration 00:05:37.864 Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 reinitialization... 00:05:37.864 spdk_app_start is called in Round 2. 
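The `killprocess 58237` trace above shows the helper's shape: confirm the pid is still alive with `kill -0`, read its command name with `ps --no-headers -o comm=`, refuse to signal anything running as `sudo`, then `kill` and `wait`. A simplified sketch under those assumptions (the real helper also branches on `uname`; only the Linux arm appears in this trace):

```shell
killprocess() {
    local pid=$1

    kill -0 "$pid" 2>/dev/null || return 1        # still running?

    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" != "sudo" ] || return 1     # never kill a sudo wrapper

    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reap it if it is our child
}

# Demo: start a throwaway child and tear it down.
sleep 30 &
killprocess $!
```

The `wait` after `kill` is what makes the teardown deterministic: by the time `killprocess` returns, the pid is reaped and a follow-up `kill -0` is guaranteed to fail.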
00:05:37.864 Shutdown signal received, stop current app iteration 00:05:37.864 Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 reinitialization... 00:05:37.864 spdk_app_start is called in Round 3. 00:05:37.864 Shutdown signal received, stop current app iteration 00:05:37.864 15:13:23 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:37.864 15:13:23 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:37.864 00:05:37.864 real 0m19.634s 00:05:37.864 user 0m41.863s 00:05:37.864 sys 0m3.176s 00:05:37.864 15:13:23 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.864 15:13:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:37.864 ************************************ 00:05:37.864 END TEST app_repeat 00:05:37.864 ************************************ 00:05:37.864 15:13:24 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:37.864 15:13:24 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:37.864 15:13:24 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.864 15:13:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.864 15:13:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.864 ************************************ 00:05:37.864 START TEST cpu_locks 00:05:37.864 ************************************ 00:05:37.864 15:13:24 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:37.864 * Looking for test storage... 
00:05:37.864 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:37.864 15:13:24 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:37.864 15:13:24 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:37.864 15:13:24 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:37.864 15:13:24 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:37.864 15:13:24 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.864 15:13:24 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.864 15:13:24 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.864 15:13:24 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.864 15:13:24 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.864 15:13:24 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.864 15:13:24 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.864 15:13:24 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.864 15:13:24 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.864 15:13:24 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.864 15:13:24 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.864 15:13:24 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:37.864 15:13:24 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:37.864 15:13:24 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.864 15:13:24 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:37.864 15:13:24 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:37.864 15:13:24 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:37.864 15:13:24 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.864 15:13:24 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:37.864 15:13:24 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.864 15:13:24 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:37.864 15:13:24 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:37.864 15:13:24 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.864 15:13:24 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:37.865 15:13:24 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.865 15:13:24 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.865 15:13:24 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.865 15:13:24 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:37.865 15:13:24 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.865 15:13:24 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:37.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.865 --rc genhtml_branch_coverage=1 00:05:37.865 --rc genhtml_function_coverage=1 00:05:37.865 --rc genhtml_legend=1 00:05:37.865 --rc geninfo_all_blocks=1 00:05:37.865 --rc geninfo_unexecuted_blocks=1 00:05:37.865 00:05:37.865 ' 00:05:37.865 15:13:24 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:37.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.865 --rc genhtml_branch_coverage=1 00:05:37.865 --rc genhtml_function_coverage=1 00:05:37.865 --rc genhtml_legend=1 00:05:37.865 --rc geninfo_all_blocks=1 00:05:37.865 --rc geninfo_unexecuted_blocks=1 
00:05:37.865 00:05:37.865 ' 00:05:37.865 15:13:24 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:37.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.865 --rc genhtml_branch_coverage=1 00:05:37.865 --rc genhtml_function_coverage=1 00:05:37.865 --rc genhtml_legend=1 00:05:37.865 --rc geninfo_all_blocks=1 00:05:37.865 --rc geninfo_unexecuted_blocks=1 00:05:37.865 00:05:37.865 ' 00:05:37.865 15:13:24 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:37.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.865 --rc genhtml_branch_coverage=1 00:05:37.865 --rc genhtml_function_coverage=1 00:05:37.865 --rc genhtml_legend=1 00:05:37.865 --rc geninfo_all_blocks=1 00:05:37.865 --rc geninfo_unexecuted_blocks=1 00:05:37.865 00:05:37.865 ' 00:05:37.865 15:13:24 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:37.865 15:13:24 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:37.865 15:13:24 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:37.865 15:13:24 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:37.865 15:13:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.865 15:13:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.865 15:13:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.865 ************************************ 00:05:37.865 START TEST default_locks 00:05:37.865 ************************************ 00:05:37.865 15:13:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:37.865 15:13:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58690 00:05:37.865 15:13:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:37.865 
15:13:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58690 00:05:37.865 15:13:24 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58690 ']' 00:05:37.865 15:13:24 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.865 15:13:24 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.865 15:13:24 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.865 15:13:24 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.865 15:13:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.124 [2024-11-20 15:13:24.421280] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:05:38.124 [2024-11-20 15:13:24.421411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58690 ] 00:05:38.124 [2024-11-20 15:13:24.599945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.383 [2024-11-20 15:13:24.721257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.354 15:13:25 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.354 15:13:25 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:39.354 15:13:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58690 00:05:39.354 15:13:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58690 00:05:39.354 15:13:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.918 15:13:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58690 00:05:39.918 15:13:26 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58690 ']' 00:05:39.918 15:13:26 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58690 00:05:39.918 15:13:26 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:39.918 15:13:26 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.918 15:13:26 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58690 00:05:39.918 killing process with pid 58690 00:05:39.918 15:13:26 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.918 15:13:26 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.918 15:13:26 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58690' 00:05:39.918 15:13:26 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58690 00:05:39.918 15:13:26 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58690 00:05:42.444 15:13:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58690 00:05:42.444 15:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:42.444 15:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58690 00:05:42.444 15:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:42.444 15:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:42.444 15:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:42.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
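The `NOT waitforlisten 58690` sequence that follows is the negative-test harness: `valid_exec_arg` checks the target is a callable function, the command runs against a pid that no longer exists, and its failing exit status is inverted (`es=1` counts as success because the failure was expected). The inversion itself reduces to a wrapper like this; the name `NOT` matches the trace, but the body is a simplified sketch without the `valid_exec_arg` argument check or the `es > 128` signal handling:

```shell
# Succeed only when the wrapped command fails: an expected-failure assertion.
NOT() {
    if "$@"; then
        return 1    # the command unexpectedly succeeded
    fi
    return 0        # it failed, which is what the caller wanted
}

NOT false && echo "expected failure observed"
```

Running the command as `"$@"` rather than through `eval` keeps argument boundaries intact, so wrapped commands with spaces in their arguments behave exactly as they would unwrapped.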
00:05:42.444 ERROR: process (pid: 58690) is no longer running
00:05:42.444 ************************************
00:05:42.444 END TEST default_locks
00:05:42.444 ************************************
00:05:42.444 15:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:42.444 15:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58690
00:05:42.444 15:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58690 ']'
00:05:42.444 15:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:42.444 15:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:42.444 15:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:42.444 15:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:42.444 15:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:42.444 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58690) - No such process
00:05:42.444 15:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:42.444 15:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:05:42.444 15:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:05:42.444 15:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:42.444 15:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:42.444 15:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:42.444 15:13:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:05:42.444 15:13:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:42.444 15:13:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:05:42.444 15:13:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:42.444 
00:05:42.444 real	0m4.331s
00:05:42.444 user	0m4.335s
00:05:42.444 sys	0m0.711s
00:05:42.444 15:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:42.444 15:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:42.444 15:13:28 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:05:42.444 15:13:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:42.444 15:13:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:42.444 15:13:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:42.444 ************************************
00:05:42.444 START TEST default_locks_via_rpc
00:05:42.444 ************************************
00:05:42.444 15:13:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:05:42.444 15:13:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58767
00:05:42.444 15:13:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58767
00:05:42.444 15:13:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:42.444 15:13:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58767 ']'
00:05:42.444 15:13:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:42.444 15:13:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:42.444 15:13:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:42.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:42.444 15:13:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:42.444 15:13:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:42.444 [2024-11-20 15:13:28.815068] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... [2024-11-20 15:13:28.815436] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58767 ]
00:05:42.704 [2024-11-20 15:13:28.999067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:42.704 [2024-11-20 15:13:29.126765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:43.640 15:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:43.640 15:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:43.640 15:13:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:43.640 15:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:43.640 15:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:43.640 15:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:43.640 15:13:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:43.640 15:13:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:43.640 15:13:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:43.640 15:13:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:43.640 15:13:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:43.640 15:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:43.640 15:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:43.640 15:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:43.640 15:13:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58767
00:05:43.640 15:13:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58767
00:05:43.640 15:13:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:44.208 15:13:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58767
00:05:44.208 15:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58767 ']'
00:05:44.208 15:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58767
00:05:44.208 15:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:05:44.208 15:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:44.208 15:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58767
00:05:44.208 killing process with pid 58767
00:05:44.208 15:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:44.208 15:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:44.208 15:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58767'
00:05:44.208 15:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58767
00:05:44.208 15:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58767
00:05:46.744 
00:05:46.744 real	0m4.202s
00:05:46.744 user	0m4.144s
00:05:46.744 sys	0m0.695s
00:05:46.744 15:13:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:46.744 15:13:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:46.744 ************************************
00:05:46.744 END TEST default_locks_via_rpc
00:05:46.744 ************************************
00:05:46.744 15:13:32 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:46.744 15:13:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:46.744 15:13:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:46.744 15:13:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:46.744 ************************************
00:05:46.744 START TEST non_locking_app_on_locked_coremask
00:05:46.744 ************************************
00:05:46.744 15:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:05:46.744 15:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58841
00:05:46.744 15:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:46.744 15:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58841 /var/tmp/spdk.sock
00:05:46.744 15:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58841 ']'
00:05:46.744 15:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:46.744 15:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:46.744 15:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:46.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:46.744 15:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:46.744 15:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:46.744 [2024-11-20 15:13:33.085123] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization...
00:05:46.744 [2024-11-20 15:13:33.085259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58841 ]
00:05:47.004 [2024-11-20 15:13:33.265199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:47.004 [2024-11-20 15:13:33.386370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:47.942 15:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:47.943 15:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:47.943 15:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:47.943 15:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58857
00:05:47.943 15:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58857 /var/tmp/spdk2.sock
00:05:47.943 15:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58857 ']'
00:05:47.943 15:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:47.943 15:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:47.943 15:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:47.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:47.943 15:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:47.943 15:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:47.943 [2024-11-20 15:13:34.403640] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization...
00:05:47.943 [2024-11-20 15:13:34.404044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58857 ]
00:05:48.202 [2024-11-20 15:13:34.591259] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:48.202 [2024-11-20 15:13:34.591342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:48.461 [2024-11-20 15:13:34.834622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:51.000 15:13:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:51.000 15:13:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:51.000 15:13:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58841
00:05:51.000 15:13:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58841
00:05:51.000 15:13:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:51.569 15:13:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58841
00:05:51.569 15:13:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58841 ']'
00:05:51.569 15:13:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58841
00:05:51.569 15:13:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:51.569 15:13:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:51.569 15:13:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58841
00:05:51.569 killing process with pid 58841
00:05:51.569 15:13:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:51.569 15:13:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:51.569 15:13:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58841'
00:05:51.569 15:13:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58841
00:05:51.569 15:13:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58841
00:05:56.842 15:13:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58857
00:05:56.842 15:13:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58857 ']'
00:05:56.842 15:13:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58857
00:05:56.842 15:13:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:56.842 15:13:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:56.842 15:13:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58857
00:05:56.842 killing process with pid 58857
00:05:56.842 15:13:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:56.842 15:13:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:56.842 15:13:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58857'
00:05:56.842 15:13:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58857
00:05:56.842 15:13:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58857
00:05:59.380 
00:05:59.380 real	0m12.317s
00:05:59.380 user	0m12.667s
00:05:59.380 sys	0m1.472s
00:05:59.380 15:13:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:59.380 15:13:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:59.380 ************************************
00:05:59.380 END TEST non_locking_app_on_locked_coremask
00:05:59.380 ************************************
00:05:59.380 15:13:45 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:59.380 15:13:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:59.380 15:13:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:59.380 15:13:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:59.380 ************************************
00:05:59.380 START TEST locking_app_on_unlocked_coremask
00:05:59.380 ************************************
00:05:59.380 15:13:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:05:59.380 15:13:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59019
00:05:59.380 15:13:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:59.380 15:13:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59019 /var/tmp/spdk.sock
00:05:59.380 15:13:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59019 ']'
00:05:59.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:59.380 15:13:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:59.380 15:13:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:59.380 15:13:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:59.381 15:13:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:59.381 15:13:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:59.381 [2024-11-20 15:13:45.470480] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization...
00:05:59.381 [2024-11-20 15:13:45.470622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59019 ]
00:05:59.381 [2024-11-20 15:13:45.634232] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:59.381 [2024-11-20 15:13:45.634297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:59.381 [2024-11-20 15:13:45.754679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:00.316 15:13:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:00.316 15:13:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:00.316 15:13:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:00.316 15:13:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59035
00:06:00.316 15:13:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59035 /var/tmp/spdk2.sock
00:06:00.316 15:13:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59035 ']'
00:06:00.316 15:13:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:00.316 15:13:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:00.316 15:13:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:00.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:00.316 15:13:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:00.316 15:13:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:00.316 [2024-11-20 15:13:46.728395] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization...
00:06:00.316 [2024-11-20 15:13:46.728759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59035 ]
00:06:00.574 [2024-11-20 15:13:46.913769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:00.832 [2024-11-20 15:13:47.156898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:03.362 15:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:03.362 15:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:03.362 15:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59035
00:06:03.362 15:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59035
00:06:03.362 15:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:03.929 15:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59019
00:06:03.929 15:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59019 ']'
00:06:03.929 15:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59019
00:06:03.929 15:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:03.929 15:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:03.929 15:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59019
00:06:03.929 killing process with pid 59019
00:06:03.929 15:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:03.929 15:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:03.929 15:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59019'
00:06:03.929 15:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59019
00:06:03.929 15:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59019
00:06:09.203 15:13:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59035
00:06:09.203 15:13:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59035 ']'
00:06:09.203 15:13:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59035
00:06:09.203 15:13:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:09.203 15:13:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:09.203 15:13:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59035
00:06:09.203 killing process with pid 59035
00:06:09.203 15:13:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:09.203 15:13:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:09.203 15:13:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59035'
00:06:09.203 15:13:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59035
00:06:09.203 15:13:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59035
00:06:11.105 
00:06:11.105 real	0m12.213s
00:06:11.105 user	0m12.498s
00:06:11.105 sys	0m1.491s
00:06:11.105 ************************************
00:06:11.105 END TEST locking_app_on_unlocked_coremask
00:06:11.105 15:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:11.105 15:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:11.105 ************************************
00:06:11.365 15:13:57 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:11.365 15:13:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:11.365 15:13:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:11.365 15:13:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:11.365 ************************************
00:06:11.365 START TEST locking_app_on_locked_coremask
00:06:11.365 ************************************
00:06:11.365 15:13:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:06:11.365 15:13:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59189
00:06:11.365 15:13:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:11.365 15:13:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59189 /var/tmp/spdk.sock
00:06:11.365 15:13:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59189 ']'
00:06:11.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:11.365 15:13:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:11.365 15:13:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:11.365 15:13:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:11.365 15:13:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:11.365 15:13:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:11.365 [2024-11-20 15:13:57.750071] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization...
00:06:11.365 [2024-11-20 15:13:57.750190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59189 ]
00:06:11.624 [2024-11-20 15:13:57.917121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:11.624 [2024-11-20 15:13:58.047138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:12.561 15:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:12.561 15:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:12.561 15:13:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59205
00:06:12.561 15:13:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59205 /var/tmp/spdk2.sock
00:06:12.561 15:13:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:12.561 15:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:06:12.561 15:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59205 /var/tmp/spdk2.sock
00:06:12.561 15:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:12.561 15:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:12.561 15:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:12.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:12.561 15:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:12.561 15:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59205 /var/tmp/spdk2.sock
00:06:12.561 15:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59205 ']'
00:06:12.561 15:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:12.561 15:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:12.561 15:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:12.561 15:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:12.561 15:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:12.820 [2024-11-20 15:13:59.053256] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization...
00:06:12.820 [2024-11-20 15:13:59.054158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59205 ]
00:06:12.820 [2024-11-20 15:13:59.237052] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59189 has claimed it.
00:06:12.820 [2024-11-20 15:13:59.237117] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:13.387 ERROR: process (pid: 59205) is no longer running
00:06:13.387 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59205) - No such process
00:06:13.387 15:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:13.388 15:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:06:13.388 15:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:06:13.388 15:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:13.388 15:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:13.388 15:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:13.388 15:13:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59189
00:06:13.388 15:13:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59189
00:06:13.388 15:13:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:13.956 15:14:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59189
00:06:13.956 15:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59189 ']'
00:06:13.956 15:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59189
00:06:13.956 15:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:13.956 15:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:13.956 15:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59189
00:06:13.956 15:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:13.956 15:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:13.956 15:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59189'
00:06:13.956 killing process with pid 59189
00:06:13.956 15:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59189
00:06:13.956 15:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59189
00:06:16.490 
00:06:16.490 real	0m5.042s
00:06:16.490 user	0m5.261s
00:06:16.490 sys	0m0.867s
00:06:16.490 15:14:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:16.490 ************************************
00:06:16.490 END TEST locking_app_on_locked_coremask
00:06:16.490 ************************************
00:06:16.490 15:14:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:16.490 15:14:02 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:16.490 15:14:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:16.490 15:14:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:16.490 15:14:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:16.490 ************************************
00:06:16.490 START TEST locking_overlapped_coremask
00:06:16.490 ************************************
00:06:16.490 15:14:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:06:16.490 15:14:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59280
00:06:16.490 15:14:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:06:16.490 15:14:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59280 /var/tmp/spdk.sock
00:06:16.490 15:14:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59280 ']'
00:06:16.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:16.490 15:14:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:16.490 15:14:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:16.490 15:14:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:16.490 15:14:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:16.490 15:14:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:16.490 [2024-11-20 15:14:02.866063] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization...
00:06:16.490 [2024-11-20 15:14:02.866190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59280 ]
00:06:16.749 [2024-11-20 15:14:03.048436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:16.749 [2024-11-20 15:14:03.175632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:16.749 [2024-11-20 15:14:03.175816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:16.749 [2024-11-20 15:14:03.175849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:17.683 15:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:17.683 15:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:17.683 15:14:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59298
00:06:17.683 15:14:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:06:17.683 15:14:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59298 /var/tmp/spdk2.sock
00:06:17.683 15:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:06:17.683 15:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59298
/var/tmp/spdk2.sock 00:06:17.683 15:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:17.683 15:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.683 15:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:17.683 15:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.683 15:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59298 /var/tmp/spdk2.sock 00:06:17.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.683 15:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59298 ']' 00:06:17.683 15:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.683 15:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.683 15:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.683 15:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.683 15:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.942 [2024-11-20 15:14:04.172895] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:06:17.942 [2024-11-20 15:14:04.173288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59298 ] 00:06:17.942 [2024-11-20 15:14:04.365975] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59280 has claimed it. 00:06:17.942 [2024-11-20 15:14:04.366104] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:18.513 ERROR: process (pid: 59298) is no longer running 00:06:18.513 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59298) - No such process 00:06:18.513 15:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.513 15:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:18.513 15:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:18.513 15:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:18.513 15:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:18.513 15:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:18.513 15:14:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:18.513 15:14:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:18.513 15:14:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:18.513 15:14:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:18.513 15:14:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59280 00:06:18.513 15:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59280 ']' 00:06:18.513 15:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59280 00:06:18.513 15:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:18.513 15:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.513 15:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59280 00:06:18.513 15:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.513 15:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.513 15:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59280' 00:06:18.513 killing process with pid 59280 00:06:18.513 15:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59280 00:06:18.513 15:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59280 00:06:21.050 00:06:21.050 real 0m4.651s 00:06:21.050 user 0m12.701s 00:06:21.050 sys 0m0.654s 00:06:21.050 15:14:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.050 15:14:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.050 ************************************ 
00:06:21.050 END TEST locking_overlapped_coremask 00:06:21.050 ************************************ 00:06:21.050 15:14:07 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:21.050 15:14:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.050 15:14:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.050 15:14:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.050 ************************************ 00:06:21.050 START TEST locking_overlapped_coremask_via_rpc 00:06:21.050 ************************************ 00:06:21.050 15:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:21.050 15:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59363 00:06:21.050 15:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:21.051 15:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59363 /var/tmp/spdk.sock 00:06:21.051 15:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59363 ']' 00:06:21.051 15:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.051 15:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.051 15:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:21.051 15:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.051 15:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.310 [2024-11-20 15:14:07.594230] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:06:21.310 [2024-11-20 15:14:07.594564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59363 ] 00:06:21.310 [2024-11-20 15:14:07.778839] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:21.310 [2024-11-20 15:14:07.779050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.569 [2024-11-20 15:14:07.902224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.569 [2024-11-20 15:14:07.902375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.569 [2024-11-20 15:14:07.902403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.566 15:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.567 15:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:22.567 15:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59391 00:06:22.567 15:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:22.567 15:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59391 /var/tmp/spdk2.sock 00:06:22.567 15:14:08 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59391 ']' 00:06:22.567 15:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.567 15:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.567 15:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.567 15:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.567 15:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.567 [2024-11-20 15:14:08.913202] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:06:22.567 [2024-11-20 15:14:08.913574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59391 ] 00:06:22.826 [2024-11-20 15:14:09.099320] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:22.826 [2024-11-20 15:14:09.099392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.085 [2024-11-20 15:14:09.351369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.085 [2024-11-20 15:14:09.351452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.085 [2024-11-20 15:14:09.351503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:25.621 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.621 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:25.621 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.622 15:14:11 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.622 [2024-11-20 15:14:11.603913] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59363 has claimed it. 00:06:25.622 request: 00:06:25.622 { 00:06:25.622 "method": "framework_enable_cpumask_locks", 00:06:25.622 "req_id": 1 00:06:25.622 } 00:06:25.622 Got JSON-RPC error response 00:06:25.622 response: 00:06:25.622 { 00:06:25.622 "code": -32603, 00:06:25.622 "message": "Failed to claim CPU core: 2" 00:06:25.622 } 00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59363 /var/tmp/spdk.sock 00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59363 ']' 00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59391 /var/tmp/spdk2.sock 00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59391 ']' 00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.622 15:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.622 15:14:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.622 15:14:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:25.622 15:14:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:25.622 15:14:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:25.622 15:14:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:25.622 15:14:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:25.622 00:06:25.622 real 0m4.592s 00:06:25.622 user 0m1.497s 00:06:25.622 sys 0m0.268s 00:06:25.622 15:14:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.622 15:14:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.622 ************************************ 00:06:25.622 END TEST locking_overlapped_coremask_via_rpc 00:06:25.622 ************************************ 00:06:25.881 15:14:12 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:25.881 15:14:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59363 ]] 00:06:25.881 15:14:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59363 00:06:25.881 15:14:12 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59363 ']' 00:06:25.881 15:14:12 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59363 00:06:25.881 15:14:12 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:25.881 15:14:12 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.881 15:14:12 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59363 00:06:25.881 killing process with pid 59363 00:06:25.881 15:14:12 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.881 15:14:12 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.881 15:14:12 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59363' 00:06:25.881 15:14:12 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59363 00:06:25.881 15:14:12 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59363 00:06:28.528 15:14:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59391 ]] 00:06:28.528 15:14:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59391 00:06:28.528 15:14:14 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59391 ']' 00:06:28.528 15:14:14 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59391 00:06:28.528 15:14:14 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:28.528 15:14:14 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.528 15:14:14 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59391 00:06:28.528 killing process with pid 59391 00:06:28.528 15:14:14 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:28.528 15:14:14 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:28.528 15:14:14 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59391' 00:06:28.528 15:14:14 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59391 00:06:28.528 15:14:14 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59391 00:06:31.061 15:14:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:31.061 Process with pid 59363 is not found 00:06:31.061 Process with pid 59391 is not found 00:06:31.061 15:14:17 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:31.061 15:14:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59363 ]] 00:06:31.061 15:14:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59363 00:06:31.061 15:14:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59363 ']' 00:06:31.061 15:14:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59363 00:06:31.061 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59363) - No such process 00:06:31.061 15:14:17 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59363 is not found' 00:06:31.061 15:14:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59391 ]] 00:06:31.061 15:14:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59391 00:06:31.061 15:14:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59391 ']' 00:06:31.061 15:14:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59391 00:06:31.061 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59391) - No such process 00:06:31.061 15:14:17 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59391 is not found' 00:06:31.061 15:14:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:31.061 00:06:31.061 real 0m53.464s 00:06:31.061 user 1m31.595s 00:06:31.061 sys 0m7.498s 00:06:31.061 15:14:17 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.061 ************************************ 00:06:31.061 END TEST cpu_locks 00:06:31.061 
************************************ 00:06:31.061 15:14:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.319 ************************************ 00:06:31.319 END TEST event 00:06:31.319 ************************************ 00:06:31.319 00:06:31.319 real 1m24.795s 00:06:31.319 user 2m33.182s 00:06:31.319 sys 0m12.042s 00:06:31.319 15:14:17 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.319 15:14:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:31.319 15:14:17 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:31.319 15:14:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.319 15:14:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.319 15:14:17 -- common/autotest_common.sh@10 -- # set +x 00:06:31.319 ************************************ 00:06:31.319 START TEST thread 00:06:31.319 ************************************ 00:06:31.319 15:14:17 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:31.319 * Looking for test storage... 
00:06:31.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:31.319 15:14:17 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:31.319 15:14:17 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:31.319 15:14:17 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:31.578 15:14:17 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:31.578 15:14:17 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.578 15:14:17 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.578 15:14:17 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.578 15:14:17 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.579 15:14:17 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.579 15:14:17 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.579 15:14:17 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.579 15:14:17 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.579 15:14:17 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.579 15:14:17 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.579 15:14:17 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.579 15:14:17 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:31.579 15:14:17 thread -- scripts/common.sh@345 -- # : 1 00:06:31.579 15:14:17 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.579 15:14:17 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:31.579 15:14:17 thread -- scripts/common.sh@365 -- # decimal 1 00:06:31.579 15:14:17 thread -- scripts/common.sh@353 -- # local d=1 00:06:31.579 15:14:17 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.579 15:14:17 thread -- scripts/common.sh@355 -- # echo 1 00:06:31.579 15:14:17 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.579 15:14:17 thread -- scripts/common.sh@366 -- # decimal 2 00:06:31.579 15:14:17 thread -- scripts/common.sh@353 -- # local d=2 00:06:31.579 15:14:17 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.579 15:14:17 thread -- scripts/common.sh@355 -- # echo 2 00:06:31.579 15:14:17 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.579 15:14:17 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.579 15:14:17 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.579 15:14:17 thread -- scripts/common.sh@368 -- # return 0 00:06:31.579 15:14:17 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.579 15:14:17 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:31.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.579 --rc genhtml_branch_coverage=1 00:06:31.579 --rc genhtml_function_coverage=1 00:06:31.579 --rc genhtml_legend=1 00:06:31.579 --rc geninfo_all_blocks=1 00:06:31.579 --rc geninfo_unexecuted_blocks=1 00:06:31.579 00:06:31.579 ' 00:06:31.579 15:14:17 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:31.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.579 --rc genhtml_branch_coverage=1 00:06:31.579 --rc genhtml_function_coverage=1 00:06:31.579 --rc genhtml_legend=1 00:06:31.579 --rc geninfo_all_blocks=1 00:06:31.579 --rc geninfo_unexecuted_blocks=1 00:06:31.579 00:06:31.579 ' 00:06:31.579 15:14:17 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:31.579 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.579 --rc genhtml_branch_coverage=1 00:06:31.579 --rc genhtml_function_coverage=1 00:06:31.579 --rc genhtml_legend=1 00:06:31.579 --rc geninfo_all_blocks=1 00:06:31.579 --rc geninfo_unexecuted_blocks=1 00:06:31.579 00:06:31.579 ' 00:06:31.579 15:14:17 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:31.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.579 --rc genhtml_branch_coverage=1 00:06:31.579 --rc genhtml_function_coverage=1 00:06:31.579 --rc genhtml_legend=1 00:06:31.579 --rc geninfo_all_blocks=1 00:06:31.579 --rc geninfo_unexecuted_blocks=1 00:06:31.579 00:06:31.579 ' 00:06:31.579 15:14:17 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:31.579 15:14:17 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:31.579 15:14:17 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.579 15:14:17 thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.579 ************************************ 00:06:31.579 START TEST thread_poller_perf 00:06:31.579 ************************************ 00:06:31.579 15:14:17 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:31.579 [2024-11-20 15:14:17.951444] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:06:31.579 [2024-11-20 15:14:17.951729] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59594 ] 00:06:31.838 [2024-11-20 15:14:18.129779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.838 [2024-11-20 15:14:18.242592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.838 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:33.216 [2024-11-20T15:14:19.698Z] ====================================== 00:06:33.216 [2024-11-20T15:14:19.698Z] busy:2501814050 (cyc) 00:06:33.216 [2024-11-20T15:14:19.698Z] total_run_count: 382000 00:06:33.216 [2024-11-20T15:14:19.698Z] tsc_hz: 2490000000 (cyc) 00:06:33.216 [2024-11-20T15:14:19.698Z] ====================================== 00:06:33.216 [2024-11-20T15:14:19.698Z] poller_cost: 6549 (cyc), 2630 (nsec) 00:06:33.216 00:06:33.216 real 0m1.591s 00:06:33.216 user 0m1.378s 00:06:33.216 sys 0m0.104s 00:06:33.216 15:14:19 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.216 ************************************ 00:06:33.216 END TEST thread_poller_perf 00:06:33.216 ************************************ 00:06:33.216 15:14:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:33.216 15:14:19 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:33.216 15:14:19 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:33.216 15:14:19 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.216 15:14:19 thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.216 ************************************ 00:06:33.216 START TEST thread_poller_perf 00:06:33.216 
************************************ 00:06:33.216 15:14:19 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:33.216 [2024-11-20 15:14:19.621315] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:06:33.216 [2024-11-20 15:14:19.621692] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59630 ] 00:06:33.475 [2024-11-20 15:14:19.803825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.475 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:33.476 [2024-11-20 15:14:19.933464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.872 [2024-11-20T15:14:21.354Z] ====================================== 00:06:34.872 [2024-11-20T15:14:21.354Z] busy:2493993186 (cyc) 00:06:34.872 [2024-11-20T15:14:21.354Z] total_run_count: 4701000 00:06:34.872 [2024-11-20T15:14:21.354Z] tsc_hz: 2490000000 (cyc) 00:06:34.872 [2024-11-20T15:14:21.354Z] ====================================== 00:06:34.872 [2024-11-20T15:14:21.354Z] poller_cost: 530 (cyc), 212 (nsec) 00:06:34.872 ************************************ 00:06:34.872 END TEST thread_poller_perf 00:06:34.872 ************************************ 00:06:34.872 00:06:34.872 real 0m1.602s 00:06:34.872 user 0m1.376s 00:06:34.872 sys 0m0.118s 00:06:34.872 15:14:21 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.872 15:14:21 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:34.872 15:14:21 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:34.872 00:06:34.872 real 0m3.570s 00:06:34.872 user 0m2.931s 00:06:34.872 sys 0m0.429s 00:06:34.872 ************************************ 
00:06:34.872 END TEST thread 00:06:34.872 ************************************ 00:06:34.872 15:14:21 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.872 15:14:21 thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.872 15:14:21 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:34.872 15:14:21 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:34.872 15:14:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.872 15:14:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.872 15:14:21 -- common/autotest_common.sh@10 -- # set +x 00:06:34.872 ************************************ 00:06:34.872 START TEST app_cmdline 00:06:34.872 ************************************ 00:06:34.872 15:14:21 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:35.132 * Looking for test storage... 00:06:35.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:35.132 15:14:21 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:35.132 15:14:21 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:35.132 15:14:21 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:35.132 15:14:21 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:35.132 15:14:21 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.132 15:14:21 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.132 15:14:21 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.132 15:14:21 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.132 15:14:21 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.132 15:14:21 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.132 15:14:21 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.132 15:14:21 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:06:35.132 15:14:21 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.132 15:14:21 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.132 15:14:21 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.132 15:14:21 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:35.132 15:14:21 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:35.132 15:14:21 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.132 15:14:21 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:35.132 15:14:21 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:35.132 15:14:21 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:35.132 15:14:21 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.132 15:14:21 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:35.132 15:14:21 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.132 15:14:21 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:35.132 15:14:21 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:35.132 15:14:21 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.132 15:14:21 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:35.132 15:14:21 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.132 15:14:21 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.132 15:14:21 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.132 15:14:21 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:35.132 15:14:21 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.132 15:14:21 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:35.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.132 --rc genhtml_branch_coverage=1 00:06:35.132 --rc genhtml_function_coverage=1 00:06:35.132 --rc 
genhtml_legend=1 00:06:35.132 --rc geninfo_all_blocks=1 00:06:35.132 --rc geninfo_unexecuted_blocks=1 00:06:35.132 00:06:35.132 ' 00:06:35.132 15:14:21 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:35.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.132 --rc genhtml_branch_coverage=1 00:06:35.132 --rc genhtml_function_coverage=1 00:06:35.132 --rc genhtml_legend=1 00:06:35.132 --rc geninfo_all_blocks=1 00:06:35.132 --rc geninfo_unexecuted_blocks=1 00:06:35.132 00:06:35.132 ' 00:06:35.132 15:14:21 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:35.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.132 --rc genhtml_branch_coverage=1 00:06:35.132 --rc genhtml_function_coverage=1 00:06:35.132 --rc genhtml_legend=1 00:06:35.132 --rc geninfo_all_blocks=1 00:06:35.132 --rc geninfo_unexecuted_blocks=1 00:06:35.132 00:06:35.132 ' 00:06:35.132 15:14:21 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:35.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.132 --rc genhtml_branch_coverage=1 00:06:35.132 --rc genhtml_function_coverage=1 00:06:35.132 --rc genhtml_legend=1 00:06:35.132 --rc geninfo_all_blocks=1 00:06:35.132 --rc geninfo_unexecuted_blocks=1 00:06:35.132 00:06:35.132 ' 00:06:35.132 15:14:21 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:35.132 15:14:21 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59719 00:06:35.132 15:14:21 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:35.132 15:14:21 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59719 00:06:35.132 15:14:21 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59719 ']' 00:06:35.132 15:14:21 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.132 15:14:21 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:06:35.132 15:14:21 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.132 15:14:21 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.132 15:14:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:35.391 [2024-11-20 15:14:21.629478] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:06:35.391 [2024-11-20 15:14:21.629814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59719 ] 00:06:35.391 [2024-11-20 15:14:21.797958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.650 [2024-11-20 15:14:21.950197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.588 15:14:22 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.588 15:14:22 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:36.588 15:14:22 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:36.846 { 00:06:36.846 "version": "SPDK v25.01-pre git sha1 1981e6eec", 00:06:36.846 "fields": { 00:06:36.846 "major": 25, 00:06:36.846 "minor": 1, 00:06:36.846 "patch": 0, 00:06:36.846 "suffix": "-pre", 00:06:36.846 "commit": "1981e6eec" 00:06:36.846 } 00:06:36.846 } 00:06:36.846 15:14:23 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:36.846 15:14:23 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:36.846 15:14:23 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:36.846 15:14:23 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:36.846 15:14:23 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:36.846 15:14:23 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:36.846 15:14:23 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.846 15:14:23 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:36.846 15:14:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:36.846 15:14:23 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.846 15:14:23 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:36.846 15:14:23 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:36.846 15:14:23 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:36.846 15:14:23 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:36.846 15:14:23 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:36.846 15:14:23 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:36.846 15:14:23 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.846 15:14:23 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:36.846 15:14:23 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.846 15:14:23 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:36.846 15:14:23 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.846 15:14:23 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:36.846 15:14:23 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:36.846 15:14:23 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:37.105 request: 00:06:37.105 { 00:06:37.105 "method": "env_dpdk_get_mem_stats", 00:06:37.105 "req_id": 1 00:06:37.105 } 00:06:37.105 Got JSON-RPC error response 00:06:37.105 response: 00:06:37.105 { 00:06:37.105 "code": -32601, 00:06:37.105 "message": "Method not found" 00:06:37.105 } 00:06:37.105 15:14:23 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:37.105 15:14:23 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:37.105 15:14:23 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:37.105 15:14:23 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:37.105 15:14:23 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59719 00:06:37.105 15:14:23 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59719 ']' 00:06:37.105 15:14:23 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59719 00:06:37.105 15:14:23 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:37.105 15:14:23 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.105 15:14:23 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59719 00:06:37.105 killing process with pid 59719 00:06:37.105 15:14:23 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.105 15:14:23 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.105 15:14:23 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59719' 00:06:37.105 15:14:23 app_cmdline -- common/autotest_common.sh@973 -- # kill 59719 00:06:37.105 15:14:23 app_cmdline -- common/autotest_common.sh@978 -- # wait 59719 00:06:39.636 ************************************ 00:06:39.636 END TEST app_cmdline 00:06:39.636 ************************************ 
00:06:39.636 00:06:39.636 real 0m4.637s 00:06:39.636 user 0m4.862s 00:06:39.636 sys 0m0.688s 00:06:39.636 15:14:25 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.636 15:14:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:39.636 15:14:25 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:39.636 15:14:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.636 15:14:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.636 15:14:25 -- common/autotest_common.sh@10 -- # set +x 00:06:39.636 ************************************ 00:06:39.636 START TEST version 00:06:39.636 ************************************ 00:06:39.636 15:14:25 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:39.636 * Looking for test storage... 00:06:39.636 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:39.636 15:14:26 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:39.636 15:14:26 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:39.636 15:14:26 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:39.895 15:14:26 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:39.895 15:14:26 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.895 15:14:26 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.895 15:14:26 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.895 15:14:26 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.895 15:14:26 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.895 15:14:26 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.895 15:14:26 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.895 15:14:26 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.895 15:14:26 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.895 15:14:26 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:39.895 15:14:26 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.895 15:14:26 version -- scripts/common.sh@344 -- # case "$op" in 00:06:39.895 15:14:26 version -- scripts/common.sh@345 -- # : 1 00:06:39.895 15:14:26 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.895 15:14:26 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:39.895 15:14:26 version -- scripts/common.sh@365 -- # decimal 1 00:06:39.895 15:14:26 version -- scripts/common.sh@353 -- # local d=1 00:06:39.895 15:14:26 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.895 15:14:26 version -- scripts/common.sh@355 -- # echo 1 00:06:39.895 15:14:26 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.895 15:14:26 version -- scripts/common.sh@366 -- # decimal 2 00:06:39.895 15:14:26 version -- scripts/common.sh@353 -- # local d=2 00:06:39.895 15:14:26 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.895 15:14:26 version -- scripts/common.sh@355 -- # echo 2 00:06:39.895 15:14:26 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.895 15:14:26 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.895 15:14:26 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.895 15:14:26 version -- scripts/common.sh@368 -- # return 0 00:06:39.895 15:14:26 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.895 15:14:26 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:39.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.895 --rc genhtml_branch_coverage=1 00:06:39.895 --rc genhtml_function_coverage=1 00:06:39.895 --rc genhtml_legend=1 00:06:39.895 --rc geninfo_all_blocks=1 00:06:39.895 --rc geninfo_unexecuted_blocks=1 00:06:39.895 00:06:39.895 ' 00:06:39.895 15:14:26 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:06:39.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.895 --rc genhtml_branch_coverage=1 00:06:39.895 --rc genhtml_function_coverage=1 00:06:39.895 --rc genhtml_legend=1 00:06:39.895 --rc geninfo_all_blocks=1 00:06:39.895 --rc geninfo_unexecuted_blocks=1 00:06:39.895 00:06:39.895 ' 00:06:39.895 15:14:26 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:39.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.895 --rc genhtml_branch_coverage=1 00:06:39.895 --rc genhtml_function_coverage=1 00:06:39.895 --rc genhtml_legend=1 00:06:39.895 --rc geninfo_all_blocks=1 00:06:39.895 --rc geninfo_unexecuted_blocks=1 00:06:39.895 00:06:39.895 ' 00:06:39.895 15:14:26 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:39.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.895 --rc genhtml_branch_coverage=1 00:06:39.895 --rc genhtml_function_coverage=1 00:06:39.895 --rc genhtml_legend=1 00:06:39.895 --rc geninfo_all_blocks=1 00:06:39.895 --rc geninfo_unexecuted_blocks=1 00:06:39.895 00:06:39.895 ' 00:06:39.895 15:14:26 version -- app/version.sh@17 -- # get_header_version major 00:06:39.895 15:14:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:39.895 15:14:26 version -- app/version.sh@14 -- # cut -f2 00:06:39.895 15:14:26 version -- app/version.sh@14 -- # tr -d '"' 00:06:39.895 15:14:26 version -- app/version.sh@17 -- # major=25 00:06:39.895 15:14:26 version -- app/version.sh@18 -- # get_header_version minor 00:06:39.895 15:14:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:39.895 15:14:26 version -- app/version.sh@14 -- # tr -d '"' 00:06:39.895 15:14:26 version -- app/version.sh@14 -- # cut -f2 00:06:39.895 15:14:26 version -- app/version.sh@18 -- # minor=1 00:06:39.895 15:14:26 
version -- app/version.sh@19 -- # get_header_version patch 00:06:39.895 15:14:26 version -- app/version.sh@14 -- # cut -f2 00:06:39.895 15:14:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:39.895 15:14:26 version -- app/version.sh@14 -- # tr -d '"' 00:06:39.895 15:14:26 version -- app/version.sh@19 -- # patch=0 00:06:39.895 15:14:26 version -- app/version.sh@20 -- # get_header_version suffix 00:06:39.895 15:14:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:39.895 15:14:26 version -- app/version.sh@14 -- # cut -f2 00:06:39.895 15:14:26 version -- app/version.sh@14 -- # tr -d '"' 00:06:39.895 15:14:26 version -- app/version.sh@20 -- # suffix=-pre 00:06:39.895 15:14:26 version -- app/version.sh@22 -- # version=25.1 00:06:39.895 15:14:26 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:39.895 15:14:26 version -- app/version.sh@28 -- # version=25.1rc0 00:06:39.895 15:14:26 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:39.895 15:14:26 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:39.895 15:14:26 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:39.895 15:14:26 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:39.895 00:06:39.895 real 0m0.232s 00:06:39.895 user 0m0.161s 00:06:39.895 sys 0m0.107s 00:06:39.895 ************************************ 00:06:39.895 END TEST version 00:06:39.895 ************************************ 00:06:39.895 15:14:26 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.895 15:14:26 version -- common/autotest_common.sh@10 -- # set +x 00:06:39.895 
15:14:26 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:39.895 15:14:26 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:39.895 15:14:26 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:39.895 15:14:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.895 15:14:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.895 15:14:26 -- common/autotest_common.sh@10 -- # set +x 00:06:39.895 ************************************ 00:06:39.895 START TEST bdev_raid 00:06:39.895 ************************************ 00:06:39.895 15:14:26 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:39.895 * Looking for test storage... 00:06:39.895 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:39.895 15:14:26 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:39.895 15:14:26 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:06:39.895 15:14:26 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:40.154 15:14:26 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:40.154 15:14:26 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.154 15:14:26 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.154 15:14:26 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.154 15:14:26 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.154 15:14:26 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.154 15:14:26 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.154 15:14:26 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.154 15:14:26 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.154 15:14:26 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.154 15:14:26 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.154 15:14:26 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:06:40.154 15:14:26 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:40.154 15:14:26 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:40.154 15:14:26 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.154 15:14:26 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.154 15:14:26 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:40.154 15:14:26 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:40.154 15:14:26 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.154 15:14:26 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:40.154 15:14:26 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.154 15:14:26 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:40.154 15:14:26 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:40.154 15:14:26 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.154 15:14:26 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:40.154 15:14:26 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.154 15:14:26 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.154 15:14:26 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.154 15:14:26 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:40.154 15:14:26 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.154 15:14:26 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:40.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.154 --rc genhtml_branch_coverage=1 00:06:40.154 --rc genhtml_function_coverage=1 00:06:40.154 --rc genhtml_legend=1 00:06:40.154 --rc geninfo_all_blocks=1 00:06:40.154 --rc geninfo_unexecuted_blocks=1 00:06:40.154 00:06:40.154 ' 00:06:40.154 15:14:26 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:40.154 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:40.154 --rc genhtml_branch_coverage=1 00:06:40.154 --rc genhtml_function_coverage=1 00:06:40.154 --rc genhtml_legend=1 00:06:40.154 --rc geninfo_all_blocks=1 00:06:40.154 --rc geninfo_unexecuted_blocks=1 00:06:40.154 00:06:40.154 ' 00:06:40.154 15:14:26 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:40.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.154 --rc genhtml_branch_coverage=1 00:06:40.154 --rc genhtml_function_coverage=1 00:06:40.154 --rc genhtml_legend=1 00:06:40.154 --rc geninfo_all_blocks=1 00:06:40.154 --rc geninfo_unexecuted_blocks=1 00:06:40.154 00:06:40.154 ' 00:06:40.154 15:14:26 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:40.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.154 --rc genhtml_branch_coverage=1 00:06:40.154 --rc genhtml_function_coverage=1 00:06:40.154 --rc genhtml_legend=1 00:06:40.154 --rc geninfo_all_blocks=1 00:06:40.154 --rc geninfo_unexecuted_blocks=1 00:06:40.154 00:06:40.154 ' 00:06:40.154 15:14:26 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:40.154 15:14:26 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:40.154 15:14:26 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:40.154 15:14:26 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:40.154 15:14:26 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:40.154 15:14:26 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:40.154 15:14:26 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:40.154 15:14:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.154 15:14:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.154 15:14:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:40.154 ************************************ 
00:06:40.154 START TEST raid1_resize_data_offset_test 00:06:40.154 ************************************ 00:06:40.154 15:14:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:06:40.154 15:14:26 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=59907 00:06:40.154 15:14:26 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59907' 00:06:40.154 15:14:26 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:40.154 Process raid pid: 59907 00:06:40.154 15:14:26 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59907 00:06:40.154 15:14:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 59907 ']' 00:06:40.154 15:14:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.154 15:14:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.154 15:14:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.154 15:14:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.154 15:14:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.154 [2024-11-20 15:14:26.525185] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:06:40.154 [2024-11-20 15:14:26.525319] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:40.413 [2024-11-20 15:14:26.705192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.413 [2024-11-20 15:14:26.822971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.671 [2024-11-20 15:14:27.021144] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:40.671 [2024-11-20 15:14:27.021186] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:40.929 15:14:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.929 15:14:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:06:40.929 15:14:27 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:40.929 15:14:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.929 15:14:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.188 malloc0 00:06:41.188 15:14:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.188 15:14:27 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:41.188 15:14:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.188 15:14:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.188 malloc1 00:06:41.188 15:14:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.188 15:14:27 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:41.188 15:14:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.188 15:14:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.188 null0 00:06:41.188 15:14:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.188 15:14:27 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:41.188 15:14:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.188 15:14:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.188 [2024-11-20 15:14:27.547396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:41.188 [2024-11-20 15:14:27.549675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:41.188 [2024-11-20 15:14:27.549729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:41.188 [2024-11-20 15:14:27.549927] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:41.188 [2024-11-20 15:14:27.549942] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:41.188 [2024-11-20 15:14:27.550231] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:41.188 [2024-11-20 15:14:27.550402] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:41.188 [2024-11-20 15:14:27.550416] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:41.188 [2024-11-20 15:14:27.550571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:41.188 15:14:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.188 15:14:27 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:41.188 15:14:27 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:41.188 15:14:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.188 15:14:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.188 15:14:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.188 15:14:27 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:41.188 15:14:27 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:41.188 15:14:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.188 15:14:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.188 [2024-11-20 15:14:27.595302] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:41.188 15:14:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.188 15:14:27 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:41.188 15:14:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.188 15:14:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.754 malloc2 00:06:41.754 15:14:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.754 15:14:28 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:41.754 15:14:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.754 15:14:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.755 [2024-11-20 15:14:28.219340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:42.013 [2024-11-20 15:14:28.236782] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:42.013 15:14:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.013 [2024-11-20 15:14:28.238808] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:42.013 15:14:28 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:42.013 15:14:28 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:42.013 15:14:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.013 15:14:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.013 15:14:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.013 15:14:28 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:42.013 15:14:28 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59907 00:06:42.013 15:14:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 59907 ']' 00:06:42.014 15:14:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 59907 00:06:42.014 15:14:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:06:42.014 15:14:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:06:42.014 15:14:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59907 00:06:42.014 killing process with pid 59907 00:06:42.014 15:14:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.014 15:14:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.014 15:14:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59907' 00:06:42.014 15:14:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 59907 00:06:42.014 [2024-11-20 15:14:28.326253] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:42.014 15:14:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 59907 00:06:42.014 [2024-11-20 15:14:28.326521] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:42.014 [2024-11-20 15:14:28.326586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:42.014 [2024-11-20 15:14:28.326605] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:42.014 [2024-11-20 15:14:28.363841] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:42.014 [2024-11-20 15:14:28.364311] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:42.014 [2024-11-20 15:14:28.364340] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:43.918 [2024-11-20 15:14:30.216049] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:45.296 15:14:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:06:45.296 00:06:45.296 real 0m4.973s 00:06:45.296 user 0m4.841s 00:06:45.296 sys 0m0.573s 00:06:45.296 15:14:31 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.296 ************************************ 00:06:45.296 END TEST raid1_resize_data_offset_test 00:06:45.296 ************************************ 00:06:45.296 15:14:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.296 15:14:31 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:45.296 15:14:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:45.296 15:14:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.296 15:14:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:45.296 ************************************ 00:06:45.296 START TEST raid0_resize_superblock_test 00:06:45.296 ************************************ 00:06:45.296 15:14:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:06:45.296 15:14:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:45.296 15:14:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59996 00:06:45.296 15:14:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:45.296 15:14:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59996' 00:06:45.296 Process raid pid: 59996 00:06:45.296 15:14:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59996 00:06:45.296 15:14:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 59996 ']' 00:06:45.296 15:14:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.296 15:14:31 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.296 15:14:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.296 15:14:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.296 15:14:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.296 [2024-11-20 15:14:31.579698] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:06:45.296 [2024-11-20 15:14:31.579843] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:45.296 [2024-11-20 15:14:31.768627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.554 [2024-11-20 15:14:31.910575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.813 [2024-11-20 15:14:32.155075] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:45.813 [2024-11-20 15:14:32.155261] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:46.072 15:14:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.072 15:14:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:46.072 15:14:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:46.072 15:14:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.072 15:14:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:06:46.641 malloc0 00:06:46.641 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.641 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:46.641 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.641 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.641 [2024-11-20 15:14:33.061667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:46.641 [2024-11-20 15:14:33.061749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:46.641 [2024-11-20 15:14:33.061776] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:46.641 [2024-11-20 15:14:33.061792] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:46.641 [2024-11-20 15:14:33.064416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:46.641 [2024-11-20 15:14:33.064610] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:46.641 pt0 00:06:46.641 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.641 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:46.641 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.641 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.901 5f1225a6-245a-42d2-90e3-ccd1f9ebcef3 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.901 98e12ce1-a055-42bb-9102-7a9da2a984eb 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.901 c96e405e-0eb7-4232-b437-3646c855d2c2 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.901 [2024-11-20 15:14:33.205132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 98e12ce1-a055-42bb-9102-7a9da2a984eb is claimed 00:06:46.901 [2024-11-20 15:14:33.205275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c96e405e-0eb7-4232-b437-3646c855d2c2 is claimed 00:06:46.901 [2024-11-20 15:14:33.205428] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:46.901 [2024-11-20 15:14:33.205448] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:46.901 [2024-11-20 15:14:33.205831] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:46.901 [2024-11-20 15:14:33.206071] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:46.901 [2024-11-20 15:14:33.206084] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:46.901 [2024-11-20 15:14:33.206271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:46.901 15:14:33 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.901 [2024-11-20 15:14:33.305207] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.901 [2024-11-20 15:14:33.341140] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:46.901 [2024-11-20 15:14:33.341300] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '98e12ce1-a055-42bb-9102-7a9da2a984eb' was resized: old size 131072, new size 204800 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.901 [2024-11-20 15:14:33.349078] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:46.901 [2024-11-20 15:14:33.349107] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'c96e405e-0eb7-4232-b437-3646c855d2c2' was resized: old size 131072, new size 204800 00:06:46.901 [2024-11-20 15:14:33.349144] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.901 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.162 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.162 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:47.162 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:47.162 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:47.162 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.162 15:14:33 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.162 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.162 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:47.162 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:47.162 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:47.162 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:47.162 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:47.162 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.162 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.162 [2024-11-20 15:14:33.449088] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:47.162 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.162 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:47.162 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:47.162 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:47.162 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:47.162 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.162 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.162 [2024-11-20 15:14:33.484854] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:06:47.162 [2024-11-20 15:14:33.485085] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:47.162 [2024-11-20 15:14:33.485113] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:47.162 [2024-11-20 15:14:33.485130] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:47.162 [2024-11-20 15:14:33.485261] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:47.162 [2024-11-20 15:14:33.485299] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:47.162 [2024-11-20 15:14:33.485317] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:47.162 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.162 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:47.162 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.162 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.162 [2024-11-20 15:14:33.492724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:47.162 [2024-11-20 15:14:33.492781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:47.162 [2024-11-20 15:14:33.492803] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:47.162 [2024-11-20 15:14:33.492819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:47.162 [2024-11-20 15:14:33.495399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:47.162 [2024-11-20 15:14:33.495454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:06:47.162 [2024-11-20 15:14:33.497319] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 98e12ce1-a055-42bb-9102-7a9da2a984eb 00:06:47.162 [2024-11-20 15:14:33.497392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 98e12ce1-a055-42bb-9102-7a9da2a984eb is claimed 00:06:47.163 [2024-11-20 15:14:33.497505] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev c96e405e-0eb7-4232-b437-3646c855d2c2 00:06:47.163 [2024-11-20 15:14:33.497526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c96e405e-0eb7-4232-b437-3646c855d2c2 is claimed 00:06:47.163 [2024-11-20 15:14:33.497721] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev c96e405e-0eb7-4232-b437-3646c855d2c2 (2) smaller than existing raid bdev Raid (3) 00:06:47.163 [2024-11-20 15:14:33.497756] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 98e12ce1-a055-42bb-9102-7a9da2a984eb: File exists 00:06:47.163 [2024-11-20 15:14:33.497797] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:47.163 pt0 00:06:47.163 [2024-11-20 15:14:33.497812] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:47.163 [2024-11-20 15:14:33.498105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:47.163 [2024-11-20 15:14:33.498257] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:47.163 [2024-11-20 15:14:33.498273] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:47.163 [2024-11-20 15:14:33.498427] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:47.163 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.163 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:06:47.163 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.163 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.163 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.163 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:47.163 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:47.163 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.163 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.163 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:47.163 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:47.163 [2024-11-20 15:14:33.517658] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:47.163 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.163 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:47.163 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:47.163 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:47.163 15:14:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59996 00:06:47.163 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 59996 ']' 00:06:47.163 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 59996 00:06:47.163 15:14:33 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:06:47.163 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.163 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59996 00:06:47.163 killing process with pid 59996 00:06:47.163 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.163 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.163 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59996' 00:06:47.163 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 59996 00:06:47.163 15:14:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 59996 00:06:47.163 [2024-11-20 15:14:33.591907] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:47.163 [2024-11-20 15:14:33.591999] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:47.163 [2024-11-20 15:14:33.592050] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:47.163 [2024-11-20 15:14:33.592061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:49.131 [2024-11-20 15:14:35.146245] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:50.067 15:14:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:50.067 00:06:50.067 real 0m4.867s 00:06:50.067 user 0m5.024s 00:06:50.067 sys 0m0.655s 00:06:50.067 ************************************ 00:06:50.067 END TEST raid0_resize_superblock_test 00:06:50.067 ************************************ 00:06:50.067 15:14:36 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.067 15:14:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.067 15:14:36 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:50.067 15:14:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:50.067 15:14:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.067 15:14:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:50.067 ************************************ 00:06:50.067 START TEST raid1_resize_superblock_test 00:06:50.067 ************************************ 00:06:50.067 15:14:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:06:50.067 15:14:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:50.067 15:14:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60099 00:06:50.067 15:14:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:50.067 Process raid pid: 60099 00:06:50.067 15:14:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60099' 00:06:50.067 15:14:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60099 00:06:50.067 15:14:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60099 ']' 00:06:50.067 15:14:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.067 15:14:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.067 15:14:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:50.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.067 15:14:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.067 15:14:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.067 [2024-11-20 15:14:36.518089] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:06:50.067 [2024-11-20 15:14:36.518418] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:50.326 [2024-11-20 15:14:36.705134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.584 [2024-11-20 15:14:36.829776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.584 [2024-11-20 15:14:37.058276] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:50.584 [2024-11-20 15:14:37.058325] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:51.153 15:14:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.153 15:14:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:51.153 15:14:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:51.153 15:14:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.153 15:14:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.720 malloc0 00:06:51.720 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.720 15:14:38 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:51.720 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.720 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.720 [2024-11-20 15:14:38.060316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:51.720 [2024-11-20 15:14:38.060388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:51.720 [2024-11-20 15:14:38.060415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:51.720 [2024-11-20 15:14:38.060431] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:51.720 [2024-11-20 15:14:38.062985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:51.720 [2024-11-20 15:14:38.063029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:51.720 pt0 00:06:51.720 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.720 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:51.720 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.720 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.720 b8e890cd-96df-4b38-9d1b-10e55e36df48 00:06:51.720 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.720 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:51.720 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.720 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:06:51.720 0d8b181d-4645-4277-bf7b-85be018e98b1 00:06:51.720 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.720 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:51.720 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.720 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.720 bd26780b-1ef7-4df0-b49e-5e9e41dbd0e2 00:06:51.720 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.720 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:51.720 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:51.720 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.720 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.720 [2024-11-20 15:14:38.189395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0d8b181d-4645-4277-bf7b-85be018e98b1 is claimed 00:06:51.720 [2024-11-20 15:14:38.189491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev bd26780b-1ef7-4df0-b49e-5e9e41dbd0e2 is claimed 00:06:51.720 [2024-11-20 15:14:38.189639] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:51.720 [2024-11-20 15:14:38.189658] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:51.720 [2024-11-20 15:14:38.189971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:51.720 [2024-11-20 15:14:38.190173] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 
00:06:51.720 [2024-11-20 15:14:38.190186] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:51.720 [2024-11-20 15:14:38.190363] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:51.720 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.720 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:51.720 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:51.720 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.720 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.979 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.979 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:51.979 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:51.979 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.979 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.979 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:51.979 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.979 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:51.979 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:51.979 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:51.979 15:14:38 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:51.979 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:51.979 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.979 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.979 [2024-11-20 15:14:38.289529] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:51.979 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.979 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:51.979 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:51.979 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:51.979 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:51.979 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.979 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.979 [2024-11-20 15:14:38.325428] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:51.979 [2024-11-20 15:14:38.325568] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '0d8b181d-4645-4277-bf7b-85be018e98b1' was resized: old size 131072, new size 204800 00:06:51.979 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.979 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:51.979 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:51.979 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.979 [2024-11-20 15:14:38.337329] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:51.979 [2024-11-20 15:14:38.337357] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'bd26780b-1ef7-4df0-b49e-5e9e41dbd0e2' was resized: old size 131072, new size 204800 00:06:51.979 [2024-11-20 15:14:38.337395] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:51.979 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.979 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:51.979 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:51.980 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.980 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.980 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.980 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:51.980 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:51.980 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.980 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.980 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:51.980 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.980 15:14:38 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:51.980 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:51.980 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:51.980 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.980 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:51.980 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:51.980 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.980 [2024-11-20 15:14:38.437314] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:51.980 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.239 [2024-11-20 15:14:38.477044] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:52.239 [2024-11-20 15:14:38.477133] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:52.239 [2024-11-20 15:14:38.477165] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:52.239 [2024-11-20 15:14:38.477318] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:52.239 [2024-11-20 15:14:38.477513] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:52.239 [2024-11-20 15:14:38.477586] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:52.239 [2024-11-20 15:14:38.477603] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.239 [2024-11-20 15:14:38.488957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:52.239 [2024-11-20 15:14:38.489133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:52.239 [2024-11-20 15:14:38.489165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:52.239 [2024-11-20 15:14:38.489183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:52.239 [2024-11-20 15:14:38.491804] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:52.239 [2024-11-20 15:14:38.491848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:52.239 [2024-11-20 15:14:38.493606] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 0d8b181d-4645-4277-bf7b-85be018e98b1 00:06:52.239 [2024-11-20 
15:14:38.493709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0d8b181d-4645-4277-bf7b-85be018e98b1 is claimed 00:06:52.239 [2024-11-20 15:14:38.493850] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev bd26780b-1ef7-4df0-b49e-5e9e41dbd0e2 00:06:52.239 [2024-11-20 15:14:38.493873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev bd26780b-1ef7-4df0-b49e-5e9e41dbd0e2 is claimed 00:06:52.239 [2024-11-20 15:14:38.494048] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev bd26780b-1ef7-4df0-b49e-5e9e41dbd0e2 (2) smaller than existing raid bdev Raid (3) 00:06:52.239 [2024-11-20 15:14:38.494086] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 0d8b181d-4645-4277-bf7b-85be018e98b1: File exists 00:06:52.239 [2024-11-20 15:14:38.494128] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:52.239 [2024-11-20 15:14:38.494143] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:52.239 [2024-11-20 15:14:38.494413] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:52.239 [2024-11-20 15:14:38.494563] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:52.239 [2024-11-20 15:14:38.494573] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:52.239 [2024-11-20 15:14:38.494750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:52.239 pt0 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.239 15:14:38 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.239 [2024-11-20 15:14:38.517910] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60099 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60099 ']' 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60099 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60099 00:06:52.239 killing process with pid 60099 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60099' 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60099 00:06:52.239 [2024-11-20 15:14:38.597612] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:52.239 [2024-11-20 15:14:38.597729] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:52.239 [2024-11-20 15:14:38.597788] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:52.239 [2024-11-20 15:14:38.597800] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:52.239 15:14:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60099 00:06:53.693 [2024-11-20 15:14:40.097998] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:55.133 15:14:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:55.133 00:06:55.133 real 0m4.894s 00:06:55.133 user 0m5.165s 00:06:55.133 sys 0m0.627s 00:06:55.133 15:14:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.133 15:14:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.133 ************************************ 00:06:55.133 END TEST raid1_resize_superblock_test 00:06:55.133 ************************************ 00:06:55.133 15:14:41 
bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:55.133 15:14:41 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:55.133 15:14:41 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:55.133 15:14:41 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:55.133 15:14:41 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:55.133 15:14:41 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:55.133 15:14:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:55.133 15:14:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.133 15:14:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:55.133 ************************************ 00:06:55.133 START TEST raid_function_test_raid0 00:06:55.133 ************************************ 00:06:55.133 15:14:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:06:55.133 15:14:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:55.133 15:14:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:55.133 15:14:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:55.133 15:14:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60197 00:06:55.133 Process raid pid: 60197 00:06:55.133 15:14:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60197' 00:06:55.133 15:14:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:55.133 15:14:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60197 00:06:55.133 15:14:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60197 ']' 00:06:55.133 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.133 15:14:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.133 15:14:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.133 15:14:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.133 15:14:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.133 15:14:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:55.133 [2024-11-20 15:14:41.515744] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:06:55.133 [2024-11-20 15:14:41.515893] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:55.392 [2024-11-20 15:14:41.711911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.392 [2024-11-20 15:14:41.849837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.650 [2024-11-20 15:14:42.080566] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:55.650 [2024-11-20 15:14:42.080852] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.216 15:14:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.216 15:14:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:06:56.216 15:14:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:56.216 15:14:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:56.216 15:14:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:56.217 Base_1 00:06:56.217 15:14:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.217 15:14:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:56.217 15:14:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.217 15:14:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:56.217 Base_2 00:06:56.217 15:14:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.217 15:14:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:56.217 15:14:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.217 15:14:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:56.217 [2024-11-20 15:14:42.494515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:56.217 [2024-11-20 15:14:42.496831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:56.217 [2024-11-20 15:14:42.496911] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:56.217 [2024-11-20 15:14:42.496928] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:56.217 [2024-11-20 15:14:42.497236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:56.217 [2024-11-20 15:14:42.497390] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:56.217 [2024-11-20 15:14:42.497401] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 
00:06:56.217 [2024-11-20 15:14:42.497568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:56.217 15:14:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.217 15:14:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:56.217 15:14:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.217 15:14:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:56.217 15:14:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:56.217 15:14:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.217 15:14:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:56.217 15:14:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:56.217 15:14:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:56.217 15:14:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:56.217 15:14:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:56.217 15:14:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:56.217 15:14:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:56.217 15:14:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:56.217 15:14:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:56.217 15:14:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:56.217 15:14:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:56.217 15:14:42 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:56.476 [2024-11-20 15:14:42.758191] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:56.476 /dev/nbd0 00:06:56.476 15:14:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:56.476 15:14:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:56.476 15:14:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:56.476 15:14:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:06:56.476 15:14:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:56.476 15:14:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:56.476 15:14:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:56.476 15:14:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:06:56.476 15:14:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:56.476 15:14:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:56.476 15:14:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.476 1+0 records in 00:06:56.476 1+0 records out 00:06:56.476 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403545 s, 10.2 MB/s 00:06:56.476 15:14:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.476 15:14:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:06:56.476 15:14:42 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.476 15:14:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:56.476 15:14:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:06:56.476 15:14:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.476 15:14:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:56.476 15:14:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:56.476 15:14:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:56.476 15:14:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:56.735 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:56.735 { 00:06:56.735 "nbd_device": "/dev/nbd0", 00:06:56.735 "bdev_name": "raid" 00:06:56.735 } 00:06:56.735 ]' 00:06:56.735 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:56.735 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:56.735 { 00:06:56.735 "nbd_device": "/dev/nbd0", 00:06:56.735 "bdev_name": "raid" 00:06:56.735 } 00:06:56.735 ]' 00:06:56.735 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:56.735 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:56.735 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:56.735 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:56.735 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- 
# echo 1 00:06:56.735 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:56.735 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:56.735 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:56.735 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:56.735 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:56.735 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:56.735 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:56.735 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:56.735 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:56.735 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:56.735 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:56.735 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:56.735 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:56.735 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:56.735 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:56.735 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:56.735 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:56.735 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:56.735 15:14:43 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:56.735 4096+0 records in 00:06:56.735 4096+0 records out 00:06:56.735 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0376827 s, 55.7 MB/s 00:06:56.735 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:56.993 4096+0 records in 00:06:56.993 4096+0 records out 00:06:56.993 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.253469 s, 8.3 MB/s 00:06:56.993 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:56.993 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:56.993 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:56.993 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:56.993 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:56.993 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:56.993 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:56.993 128+0 records in 00:06:56.993 128+0 records out 00:06:56.993 65536 bytes (66 kB, 64 KiB) copied, 0.000831064 s, 78.9 MB/s 00:06:56.993 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:56.993 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:56.993 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:57.251 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:57.251 15:14:43 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:57.251 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:57.251 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:57.251 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:57.251 2035+0 records in 00:06:57.251 2035+0 records out 00:06:57.251 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0199999 s, 52.1 MB/s 00:06:57.251 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:57.251 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:57.251 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:57.251 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:57.251 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:57.251 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:57.251 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:57.251 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:57.251 456+0 records in 00:06:57.251 456+0 records out 00:06:57.251 233472 bytes (233 kB, 228 KiB) copied, 0.0054465 s, 42.9 MB/s 00:06:57.251 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:57.251 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:57.251 15:14:43 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:57.251 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:57.251 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:57.251 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:06:57.251 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:57.252 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:57.252 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:57.252 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:57.252 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:57.252 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.252 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:57.510 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:57.510 [2024-11-20 15:14:43.818033] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:57.510 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:57.510 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:57.510 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.510 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.510 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:57.510 15:14:43 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:57.510 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.510 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:57.510 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:57.510 15:14:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:57.768 15:14:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:57.768 15:14:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:57.768 15:14:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:57.768 15:14:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:57.768 15:14:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:57.768 15:14:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.768 15:14:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:57.768 15:14:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:57.768 15:14:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:57.768 15:14:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:57.768 15:14:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:57.768 15:14:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60197 00:06:57.768 15:14:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60197 ']' 00:06:57.768 15:14:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60197 
00:06:57.768 15:14:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:06:57.768 15:14:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.768 15:14:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60197 00:06:57.768 15:14:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.768 killing process with pid 60197 00:06:57.768 15:14:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.768 15:14:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60197' 00:06:57.768 15:14:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60197 00:06:57.768 [2024-11-20 15:14:44.197799] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:57.768 15:14:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60197 00:06:57.768 [2024-11-20 15:14:44.197975] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:57.768 [2024-11-20 15:14:44.198077] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:57.768 [2024-11-20 15:14:44.198100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:58.026 [2024-11-20 15:14:44.411226] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:59.403 15:14:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:59.403 00:06:59.403 real 0m4.219s 00:06:59.403 user 0m4.828s 00:06:59.403 sys 0m1.132s 00:06:59.403 ************************************ 00:06:59.403 END TEST raid_function_test_raid0 00:06:59.403 ************************************ 00:06:59.403 15:14:45 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.403 15:14:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:59.403 15:14:45 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:59.403 15:14:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:59.403 15:14:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.403 15:14:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:59.403 ************************************ 00:06:59.403 START TEST raid_function_test_concat 00:06:59.403 ************************************ 00:06:59.403 15:14:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:06:59.403 15:14:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:59.403 15:14:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:59.403 15:14:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:59.403 15:14:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:59.403 15:14:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60332 00:06:59.403 Process raid pid: 60332 00:06:59.403 15:14:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60332' 00:06:59.403 15:14:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60332 00:06:59.403 15:14:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60332 ']' 00:06:59.403 15:14:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.403 15:14:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:06:59.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.403 15:14:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.403 15:14:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.403 15:14:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:59.403 [2024-11-20 15:14:45.797813] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:06:59.403 [2024-11-20 15:14:45.797977] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:59.662 [2024-11-20 15:14:46.026936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.920 [2024-11-20 15:14:46.155266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.921 [2024-11-20 15:14:46.386542] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:59.921 [2024-11-20 15:14:46.386591] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:00.489 15:14:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.489 15:14:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:07:00.489 15:14:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:00.489 15:14:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.489 15:14:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:00.489 Base_1 00:07:00.489 15:14:46 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.489 15:14:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:00.489 15:14:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.489 15:14:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:00.489 Base_2 00:07:00.489 15:14:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.489 15:14:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:00.489 15:14:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.489 15:14:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:00.489 [2024-11-20 15:14:46.766102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:00.489 [2024-11-20 15:14:46.768144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:00.489 [2024-11-20 15:14:46.768218] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:00.489 [2024-11-20 15:14:46.768232] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:00.489 [2024-11-20 15:14:46.768496] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:00.489 [2024-11-20 15:14:46.768637] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:00.489 [2024-11-20 15:14:46.768666] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:00.489 [2024-11-20 15:14:46.768803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:00.489 15:14:46 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.489 15:14:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:00.489 15:14:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.489 15:14:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:00.489 15:14:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:00.489 15:14:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.489 15:14:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:00.489 15:14:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:00.489 15:14:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:00.489 15:14:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:00.489 15:14:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:00.489 15:14:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:00.489 15:14:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:00.489 15:14:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:00.489 15:14:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:00.489 15:14:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:00.489 15:14:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:00.489 15:14:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk raid /dev/nbd0 00:07:00.749 [2024-11-20 15:14:47.005862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:00.749 /dev/nbd0 00:07:00.749 15:14:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:00.749 15:14:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:00.749 15:14:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:00.749 15:14:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:07:00.749 15:14:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:00.749 15:14:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:00.749 15:14:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:00.749 15:14:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:07:00.749 15:14:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:00.749 15:14:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:00.749 15:14:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.749 1+0 records in 00:07:00.749 1+0 records out 00:07:00.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444643 s, 9.2 MB/s 00:07:00.749 15:14:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.749 15:14:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:07:00.749 15:14:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.749 15:14:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:00.749 15:14:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:07:00.749 15:14:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.749 15:14:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:00.749 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:00.750 15:14:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:00.750 15:14:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:01.009 15:14:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:01.009 { 00:07:01.009 "nbd_device": "/dev/nbd0", 00:07:01.009 "bdev_name": "raid" 00:07:01.009 } 00:07:01.009 ]' 00:07:01.009 15:14:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:01.009 { 00:07:01.009 "nbd_device": "/dev/nbd0", 00:07:01.009 "bdev_name": "raid" 00:07:01.009 } 00:07:01.009 ]' 00:07:01.009 15:14:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:01.009 15:14:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:01.009 15:14:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:01.009 15:14:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:01.009 15:14:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:01.009 15:14:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:01.009 15:14:47 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:01.009 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:01.009 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:01.009 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:01.009 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:01.009 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:01.009 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:01.009 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:01.009 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:01.009 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:01.009 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:01.009 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:01.009 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:01.009 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:01.009 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:01.009 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:01.009 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:01.009 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:01.009 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd 
if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:01.009 4096+0 records in 00:07:01.009 4096+0 records out 00:07:01.009 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0361863 s, 58.0 MB/s 00:07:01.009 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:01.268 4096+0 records in 00:07:01.268 4096+0 records out 00:07:01.268 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.237442 s, 8.8 MB/s 00:07:01.268 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:01.268 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:01.268 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:01.268 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:01.268 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:01.268 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:01.268 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:01.268 128+0 records in 00:07:01.268 128+0 records out 00:07:01.268 65536 bytes (66 kB, 64 KiB) copied, 0.00204896 s, 32.0 MB/s 00:07:01.268 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:01.268 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:01.268 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:01.268 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:01.268 15:14:47 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:01.268 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:01.268 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:01.268 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:01.268 2035+0 records in 00:07:01.268 2035+0 records out 00:07:01.268 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0187717 s, 55.5 MB/s 00:07:01.268 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:01.268 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:01.268 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:01.527 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:01.527 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:01.527 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:01.527 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:01.527 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:01.527 456+0 records in 00:07:01.527 456+0 records out 00:07:01.527 233472 bytes (233 kB, 228 KiB) copied, 0.00562697 s, 41.5 MB/s 00:07:01.527 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:01.527 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:01.527 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:07:01.527 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:01.527 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:01.527 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:01.527 15:14:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:01.527 15:14:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:01.527 15:14:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:01.527 15:14:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:01.527 15:14:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:01.527 15:14:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.527 15:14:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:01.786 [2024-11-20 15:14:48.017359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:01.786 15:14:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:01.786 15:14:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:01.786 15:14:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:01.786 15:14:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.786 15:14:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.786 15:14:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:01.786 15:14:48 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:01.786 15:14:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.786 15:14:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:01.786 15:14:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:01.786 15:14:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:01.786 15:14:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:02.046 15:14:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:02.046 15:14:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:02.046 15:14:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:02.046 15:14:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:02.046 15:14:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.046 15:14:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:02.046 15:14:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:02.046 15:14:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:02.046 15:14:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:02.046 15:14:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:02.046 15:14:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60332 00:07:02.046 15:14:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60332 ']' 00:07:02.046 15:14:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- 
# kill -0 60332 00:07:02.046 15:14:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:07:02.046 15:14:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:02.046 15:14:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60332 00:07:02.046 15:14:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:02.046 15:14:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:02.046 15:14:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60332' 00:07:02.046 killing process with pid 60332 00:07:02.046 15:14:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60332 00:07:02.046 [2024-11-20 15:14:48.379011] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:02.046 [2024-11-20 15:14:48.379116] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:02.046 15:14:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60332 00:07:02.046 [2024-11-20 15:14:48.379168] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:02.046 [2024-11-20 15:14:48.379182] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:02.305 [2024-11-20 15:14:48.591873] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:03.679 15:14:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:03.679 00:07:03.679 real 0m4.057s 00:07:03.679 user 0m4.623s 00:07:03.679 sys 0m1.138s 00:07:03.679 15:14:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.679 15:14:49 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@10 -- # set +x 00:07:03.679 ************************************ 00:07:03.680 END TEST raid_function_test_concat 00:07:03.680 ************************************ 00:07:03.680 15:14:49 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:03.680 15:14:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:03.680 15:14:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.680 15:14:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:03.680 ************************************ 00:07:03.680 START TEST raid0_resize_test 00:07:03.680 ************************************ 00:07:03.680 15:14:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:07:03.680 15:14:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:03.680 15:14:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:03.680 15:14:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:03.680 15:14:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:03.680 15:14:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:03.680 15:14:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:03.680 15:14:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:03.680 15:14:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:03.680 15:14:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60461 00:07:03.680 Process raid pid: 60461 00:07:03.680 15:14:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60461' 00:07:03.680 15:14:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 
00:07:03.680 15:14:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60461 00:07:03.680 15:14:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60461 ']' 00:07:03.680 15:14:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.680 15:14:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.680 15:14:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.680 15:14:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.680 15:14:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.680 [2024-11-20 15:14:49.921946] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:07:03.680 [2024-11-20 15:14:49.922077] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:03.680 [2024-11-20 15:14:50.105049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.938 [2024-11-20 15:14:50.233280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.196 [2024-11-20 15:14:50.455049] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:04.196 [2024-11-20 15:14:50.455103] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:04.455 15:14:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.455 15:14:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:04.455 15:14:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:04.455 15:14:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.455 15:14:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.455 Base_1 00:07:04.455 15:14:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.455 15:14:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:04.455 15:14:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.455 15:14:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.455 Base_2 00:07:04.455 15:14:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.455 15:14:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:04.455 15:14:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:04.455 15:14:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.455 15:14:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.455 [2024-11-20 15:14:50.911556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:04.455 [2024-11-20 15:14:50.913764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:04.455 [2024-11-20 15:14:50.913824] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:04.455 [2024-11-20 15:14:50.913838] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:04.456 [2024-11-20 15:14:50.914100] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:04.456 [2024-11-20 15:14:50.914207] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:04.456 [2024-11-20 15:14:50.914218] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:04.456 [2024-11-20 15:14:50.914350] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:04.456 15:14:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.456 15:14:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:04.456 15:14:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.456 15:14:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.456 [2024-11-20 15:14:50.923519] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:04.456 [2024-11-20 15:14:50.923553] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:04.456 true 
00:07:04.456 15:14:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.456 15:14:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:04.456 15:14:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:04.456 15:14:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.456 15:14:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.715 [2024-11-20 15:14:50.939702] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:04.715 15:14:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.715 15:14:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:04.715 15:14:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:04.715 15:14:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:04.715 15:14:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:04.715 15:14:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:04.715 15:14:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:04.715 15:14:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.715 15:14:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.715 [2024-11-20 15:14:50.979523] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:04.715 [2024-11-20 15:14:50.979554] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:04.715 [2024-11-20 15:14:50.979591] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:04.715 true 
00:07:04.715 15:14:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.715 15:14:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:04.715 15:14:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:04.715 15:14:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.715 15:14:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.715 [2024-11-20 15:14:50.995705] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:04.715 15:14:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.715 15:14:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:04.715 15:14:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:04.715 15:14:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:04.715 15:14:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:04.715 15:14:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:04.715 15:14:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60461 00:07:04.715 15:14:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60461 ']' 00:07:04.715 15:14:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60461 00:07:04.715 15:14:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:04.715 15:14:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.715 15:14:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60461 00:07:04.715 15:14:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:04.715 killing process with pid 60461 
00:07:04.715 15:14:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:04.715 15:14:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60461' 00:07:04.715 15:14:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60461 00:07:04.715 [2024-11-20 15:14:51.077588] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:04.715 [2024-11-20 15:14:51.077706] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:04.715 15:14:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60461 00:07:04.715 [2024-11-20 15:14:51.077757] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:04.715 [2024-11-20 15:14:51.077768] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:04.715 [2024-11-20 15:14:51.095026] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:06.090 15:14:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:06.090 00:07:06.090 real 0m2.439s 00:07:06.090 user 0m2.652s 00:07:06.090 sys 0m0.421s 00:07:06.090 ************************************ 00:07:06.090 END TEST raid0_resize_test 00:07:06.090 ************************************ 00:07:06.090 15:14:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.090 15:14:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.090 15:14:52 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:06.090 15:14:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:06.090 15:14:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.090 15:14:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:06.090 ************************************ 
00:07:06.090 START TEST raid1_resize_test 00:07:06.090 ************************************ 00:07:06.090 15:14:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:07:06.090 15:14:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:06.090 15:14:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:06.090 15:14:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:06.090 15:14:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:06.090 15:14:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:06.090 15:14:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:06.090 15:14:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:06.090 15:14:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:06.090 15:14:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60517 00:07:06.090 Process raid pid: 60517 00:07:06.091 15:14:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60517' 00:07:06.091 15:14:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60517 00:07:06.091 15:14:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60517 ']' 00:07:06.091 15:14:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.091 15:14:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.091 15:14:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:06.091 15:14:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.091 15:14:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:06.091 15:14:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.091 [2024-11-20 15:14:52.423704] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:07:06.091 [2024-11-20 15:14:52.423820] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.349 [2024-11-20 15:14:52.591184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.349 [2024-11-20 15:14:52.705974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.608 [2024-11-20 15:14:52.905469] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:06.608 [2024-11-20 15:14:52.905506] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.174 Base_1 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:07.174 15:14:53 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.174 Base_2 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.174 [2024-11-20 15:14:53.391545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:07.174 [2024-11-20 15:14:53.393612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:07.174 [2024-11-20 15:14:53.393691] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:07.174 [2024-11-20 15:14:53.393707] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:07.174 [2024-11-20 15:14:53.393982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:07.174 [2024-11-20 15:14:53.394105] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:07.174 [2024-11-20 15:14:53.394115] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:07.174 [2024-11-20 15:14:53.394261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:07.174 15:14:53 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.174 [2024-11-20 15:14:53.399563] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:07.174 [2024-11-20 15:14:53.399600] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:07.174 true 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.174 [2024-11-20 15:14:53.411722] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:07.174 [2024-11-20 15:14:53.459551] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:07.174 [2024-11-20 15:14:53.459584] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:07.174 [2024-11-20 15:14:53.459617] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:07.174 true 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:07.174 [2024-11-20 15:14:53.471715] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60517 00:07:07.174 15:14:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60517 ']' 00:07:07.175 15:14:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60517 00:07:07.175 
15:14:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:07.175 15:14:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:07.175 15:14:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60517 00:07:07.175 15:14:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:07.175 15:14:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:07.175 15:14:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60517' 00:07:07.175 killing process with pid 60517 00:07:07.175 15:14:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60517 00:07:07.175 [2024-11-20 15:14:53.558491] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:07.175 [2024-11-20 15:14:53.558601] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:07.175 15:14:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60517 00:07:07.175 [2024-11-20 15:14:53.559108] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:07.175 [2024-11-20 15:14:53.559136] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:07.175 [2024-11-20 15:14:53.576901] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:08.550 15:14:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:08.550 00:07:08.550 real 0m2.429s 00:07:08.550 user 0m2.680s 00:07:08.550 sys 0m0.353s 00:07:08.550 15:14:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.550 ************************************ 00:07:08.550 END TEST raid1_resize_test 00:07:08.550 15:14:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 
00:07:08.550 ************************************ 00:07:08.550 15:14:54 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:08.550 15:14:54 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:08.550 15:14:54 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:08.550 15:14:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:08.550 15:14:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.550 15:14:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:08.550 ************************************ 00:07:08.550 START TEST raid_state_function_test 00:07:08.550 ************************************ 00:07:08.550 15:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:07:08.550 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:08.550 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:08.550 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:08.550 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:08.550 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:08.550 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:08.550 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:08.550 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:08.550 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:08.550 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:08.550 15:14:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:08.550 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:08.550 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:08.550 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:08.550 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:08.550 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:08.550 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:08.550 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:08.550 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:08.550 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:08.550 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:08.550 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:08.550 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:08.550 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60579 00:07:08.550 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:08.550 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60579' 00:07:08.550 Process raid pid: 60579 00:07:08.550 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60579 00:07:08.550 15:14:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60579 ']' 00:07:08.550 15:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.550 15:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.550 15:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.550 15:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.550 15:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.551 [2024-11-20 15:14:54.909315] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:07:08.551 [2024-11-20 15:14:54.909480] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.809 [2024-11-20 15:14:55.110639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.809 [2024-11-20 15:14:55.249286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.073 [2024-11-20 15:14:55.465064] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.073 [2024-11-20 15:14:55.465115] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.332 15:14:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.332 15:14:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:09.332 15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:09.332 15:14:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.332 15:14:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.332 [2024-11-20 15:14:55.802367] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:09.332 [2024-11-20 15:14:55.802425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:09.332 [2024-11-20 15:14:55.802438] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:09.332 [2024-11-20 15:14:55.802451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:09.332 15:14:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.332 15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:09.332 15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:09.332 15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:09.332 15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:09.332 15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:09.332 15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:09.332 15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.332 15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.332 15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.332 
15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.332 15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.332 15:14:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.332 15:14:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.332 15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:09.590 15:14:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.590 15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.590 "name": "Existed_Raid", 00:07:09.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:09.590 "strip_size_kb": 64, 00:07:09.590 "state": "configuring", 00:07:09.590 "raid_level": "raid0", 00:07:09.590 "superblock": false, 00:07:09.590 "num_base_bdevs": 2, 00:07:09.590 "num_base_bdevs_discovered": 0, 00:07:09.590 "num_base_bdevs_operational": 2, 00:07:09.590 "base_bdevs_list": [ 00:07:09.590 { 00:07:09.590 "name": "BaseBdev1", 00:07:09.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:09.590 "is_configured": false, 00:07:09.590 "data_offset": 0, 00:07:09.590 "data_size": 0 00:07:09.590 }, 00:07:09.590 { 00:07:09.590 "name": "BaseBdev2", 00:07:09.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:09.590 "is_configured": false, 00:07:09.590 "data_offset": 0, 00:07:09.590 "data_size": 0 00:07:09.590 } 00:07:09.590 ] 00:07:09.590 }' 00:07:09.590 15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.590 15:14:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.848 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:09.849 15:14:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.849 [2024-11-20 15:14:56.201840] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:09.849 [2024-11-20 15:14:56.201887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.849 [2024-11-20 15:14:56.209808] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:09.849 [2024-11-20 15:14:56.209855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:09.849 [2024-11-20 15:14:56.209866] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:09.849 [2024-11-20 15:14:56.209882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.849 [2024-11-20 15:14:56.252814] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:09.849 BaseBdev1 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.849 [ 00:07:09.849 { 00:07:09.849 "name": "BaseBdev1", 00:07:09.849 "aliases": [ 00:07:09.849 "3d2e5ede-acca-4e35-9320-185f79899cac" 00:07:09.849 ], 00:07:09.849 "product_name": "Malloc disk", 00:07:09.849 "block_size": 512, 00:07:09.849 "num_blocks": 65536, 00:07:09.849 "uuid": 
"3d2e5ede-acca-4e35-9320-185f79899cac", 00:07:09.849 "assigned_rate_limits": { 00:07:09.849 "rw_ios_per_sec": 0, 00:07:09.849 "rw_mbytes_per_sec": 0, 00:07:09.849 "r_mbytes_per_sec": 0, 00:07:09.849 "w_mbytes_per_sec": 0 00:07:09.849 }, 00:07:09.849 "claimed": true, 00:07:09.849 "claim_type": "exclusive_write", 00:07:09.849 "zoned": false, 00:07:09.849 "supported_io_types": { 00:07:09.849 "read": true, 00:07:09.849 "write": true, 00:07:09.849 "unmap": true, 00:07:09.849 "flush": true, 00:07:09.849 "reset": true, 00:07:09.849 "nvme_admin": false, 00:07:09.849 "nvme_io": false, 00:07:09.849 "nvme_io_md": false, 00:07:09.849 "write_zeroes": true, 00:07:09.849 "zcopy": true, 00:07:09.849 "get_zone_info": false, 00:07:09.849 "zone_management": false, 00:07:09.849 "zone_append": false, 00:07:09.849 "compare": false, 00:07:09.849 "compare_and_write": false, 00:07:09.849 "abort": true, 00:07:09.849 "seek_hole": false, 00:07:09.849 "seek_data": false, 00:07:09.849 "copy": true, 00:07:09.849 "nvme_iov_md": false 00:07:09.849 }, 00:07:09.849 "memory_domains": [ 00:07:09.849 { 00:07:09.849 "dma_device_id": "system", 00:07:09.849 "dma_device_type": 1 00:07:09.849 }, 00:07:09.849 { 00:07:09.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.849 "dma_device_type": 2 00:07:09.849 } 00:07:09.849 ], 00:07:09.849 "driver_specific": {} 00:07:09.849 } 00:07:09.849 ] 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:09.849 15:14:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.849 "name": "Existed_Raid", 00:07:09.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:09.849 "strip_size_kb": 64, 00:07:09.849 "state": "configuring", 00:07:09.849 "raid_level": "raid0", 00:07:09.849 "superblock": false, 00:07:09.849 "num_base_bdevs": 2, 00:07:09.849 "num_base_bdevs_discovered": 1, 00:07:09.849 "num_base_bdevs_operational": 2, 00:07:09.849 "base_bdevs_list": [ 00:07:09.849 { 00:07:09.849 "name": "BaseBdev1", 00:07:09.849 "uuid": "3d2e5ede-acca-4e35-9320-185f79899cac", 00:07:09.849 "is_configured": true, 00:07:09.849 "data_offset": 0, 
00:07:09.849 "data_size": 65536 00:07:09.849 }, 00:07:09.849 { 00:07:09.849 "name": "BaseBdev2", 00:07:09.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:09.849 "is_configured": false, 00:07:09.849 "data_offset": 0, 00:07:09.849 "data_size": 0 00:07:09.849 } 00:07:09.849 ] 00:07:09.849 }' 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.849 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.416 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:10.416 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.416 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.416 [2024-11-20 15:14:56.648796] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:10.416 [2024-11-20 15:14:56.648856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:10.416 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.416 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:10.416 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.416 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.416 [2024-11-20 15:14:56.656841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:10.416 [2024-11-20 15:14:56.658920] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:10.416 [2024-11-20 15:14:56.658968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
00:07:10.416 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.416 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:10.416 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:10.416 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:10.416 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:10.416 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:10.416 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:10.416 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:10.416 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:10.416 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:10.416 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.416 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.416 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.416 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.416 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:10.416 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.416 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.416 15:14:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.416 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:10.416 "name": "Existed_Raid", 00:07:10.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.416 "strip_size_kb": 64, 00:07:10.416 "state": "configuring", 00:07:10.416 "raid_level": "raid0", 00:07:10.416 "superblock": false, 00:07:10.416 "num_base_bdevs": 2, 00:07:10.416 "num_base_bdevs_discovered": 1, 00:07:10.416 "num_base_bdevs_operational": 2, 00:07:10.416 "base_bdevs_list": [ 00:07:10.416 { 00:07:10.416 "name": "BaseBdev1", 00:07:10.416 "uuid": "3d2e5ede-acca-4e35-9320-185f79899cac", 00:07:10.416 "is_configured": true, 00:07:10.416 "data_offset": 0, 00:07:10.416 "data_size": 65536 00:07:10.416 }, 00:07:10.416 { 00:07:10.416 "name": "BaseBdev2", 00:07:10.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.416 "is_configured": false, 00:07:10.416 "data_offset": 0, 00:07:10.416 "data_size": 0 00:07:10.416 } 00:07:10.416 ] 00:07:10.416 }' 00:07:10.416 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:10.416 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.675 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:10.675 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.675 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.675 [2024-11-20 15:14:57.086536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:10.675 [2024-11-20 15:14:57.086600] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:10.675 [2024-11-20 15:14:57.086611] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:10.675 [2024-11-20 15:14:57.086916] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:10.675 [2024-11-20 15:14:57.087090] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:10.675 [2024-11-20 15:14:57.087105] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:10.675 [2024-11-20 15:14:57.087360] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:10.675 BaseBdev2 00:07:10.675 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.675 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:10.675 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:10.675 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:10.675 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:10.675 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:10.675 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:10.675 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:10.675 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.675 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.675 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.675 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:10.675 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.675 15:14:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.675 [ 00:07:10.675 { 00:07:10.675 "name": "BaseBdev2", 00:07:10.675 "aliases": [ 00:07:10.675 "4bd3cbfe-98f3-4241-8efb-97c6420d749b" 00:07:10.675 ], 00:07:10.675 "product_name": "Malloc disk", 00:07:10.676 "block_size": 512, 00:07:10.676 "num_blocks": 65536, 00:07:10.676 "uuid": "4bd3cbfe-98f3-4241-8efb-97c6420d749b", 00:07:10.676 "assigned_rate_limits": { 00:07:10.676 "rw_ios_per_sec": 0, 00:07:10.676 "rw_mbytes_per_sec": 0, 00:07:10.676 "r_mbytes_per_sec": 0, 00:07:10.676 "w_mbytes_per_sec": 0 00:07:10.676 }, 00:07:10.676 "claimed": true, 00:07:10.676 "claim_type": "exclusive_write", 00:07:10.676 "zoned": false, 00:07:10.676 "supported_io_types": { 00:07:10.676 "read": true, 00:07:10.676 "write": true, 00:07:10.676 "unmap": true, 00:07:10.676 "flush": true, 00:07:10.676 "reset": true, 00:07:10.676 "nvme_admin": false, 00:07:10.676 "nvme_io": false, 00:07:10.676 "nvme_io_md": false, 00:07:10.676 "write_zeroes": true, 00:07:10.676 "zcopy": true, 00:07:10.676 "get_zone_info": false, 00:07:10.676 "zone_management": false, 00:07:10.676 "zone_append": false, 00:07:10.676 "compare": false, 00:07:10.676 "compare_and_write": false, 00:07:10.676 "abort": true, 00:07:10.676 "seek_hole": false, 00:07:10.676 "seek_data": false, 00:07:10.676 "copy": true, 00:07:10.676 "nvme_iov_md": false 00:07:10.676 }, 00:07:10.676 "memory_domains": [ 00:07:10.676 { 00:07:10.676 "dma_device_id": "system", 00:07:10.676 "dma_device_type": 1 00:07:10.676 }, 00:07:10.676 { 00:07:10.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.676 "dma_device_type": 2 00:07:10.676 } 00:07:10.676 ], 00:07:10.676 "driver_specific": {} 00:07:10.676 } 00:07:10.676 ] 00:07:10.676 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.676 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:10.676 15:14:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:10.676 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:10.676 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:10.676 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:10.676 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:10.676 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:10.676 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:10.676 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:10.676 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:10.676 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.676 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.676 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.676 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.676 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.676 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.676 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:10.676 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.676 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:07:10.676 "name": "Existed_Raid", 00:07:10.676 "uuid": "1f907c2d-3ade-4580-92e5-c8943051cb70", 00:07:10.676 "strip_size_kb": 64, 00:07:10.676 "state": "online", 00:07:10.676 "raid_level": "raid0", 00:07:10.676 "superblock": false, 00:07:10.676 "num_base_bdevs": 2, 00:07:10.676 "num_base_bdevs_discovered": 2, 00:07:10.676 "num_base_bdevs_operational": 2, 00:07:10.676 "base_bdevs_list": [ 00:07:10.676 { 00:07:10.676 "name": "BaseBdev1", 00:07:10.676 "uuid": "3d2e5ede-acca-4e35-9320-185f79899cac", 00:07:10.676 "is_configured": true, 00:07:10.676 "data_offset": 0, 00:07:10.676 "data_size": 65536 00:07:10.676 }, 00:07:10.676 { 00:07:10.676 "name": "BaseBdev2", 00:07:10.676 "uuid": "4bd3cbfe-98f3-4241-8efb-97c6420d749b", 00:07:10.676 "is_configured": true, 00:07:10.676 "data_offset": 0, 00:07:10.676 "data_size": 65536 00:07:10.676 } 00:07:10.676 ] 00:07:10.676 }' 00:07:10.935 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:10.935 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.193 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:11.193 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:11.193 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:11.193 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:11.193 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:11.193 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:11.193 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:11.193 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:07:11.193 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.193 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.193 [2024-11-20 15:14:57.514246] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:11.193 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.193 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:11.193 "name": "Existed_Raid", 00:07:11.193 "aliases": [ 00:07:11.193 "1f907c2d-3ade-4580-92e5-c8943051cb70" 00:07:11.193 ], 00:07:11.193 "product_name": "Raid Volume", 00:07:11.193 "block_size": 512, 00:07:11.193 "num_blocks": 131072, 00:07:11.193 "uuid": "1f907c2d-3ade-4580-92e5-c8943051cb70", 00:07:11.193 "assigned_rate_limits": { 00:07:11.193 "rw_ios_per_sec": 0, 00:07:11.193 "rw_mbytes_per_sec": 0, 00:07:11.193 "r_mbytes_per_sec": 0, 00:07:11.193 "w_mbytes_per_sec": 0 00:07:11.193 }, 00:07:11.193 "claimed": false, 00:07:11.194 "zoned": false, 00:07:11.194 "supported_io_types": { 00:07:11.194 "read": true, 00:07:11.194 "write": true, 00:07:11.194 "unmap": true, 00:07:11.194 "flush": true, 00:07:11.194 "reset": true, 00:07:11.194 "nvme_admin": false, 00:07:11.194 "nvme_io": false, 00:07:11.194 "nvme_io_md": false, 00:07:11.194 "write_zeroes": true, 00:07:11.194 "zcopy": false, 00:07:11.194 "get_zone_info": false, 00:07:11.194 "zone_management": false, 00:07:11.194 "zone_append": false, 00:07:11.194 "compare": false, 00:07:11.194 "compare_and_write": false, 00:07:11.194 "abort": false, 00:07:11.194 "seek_hole": false, 00:07:11.194 "seek_data": false, 00:07:11.194 "copy": false, 00:07:11.194 "nvme_iov_md": false 00:07:11.194 }, 00:07:11.194 "memory_domains": [ 00:07:11.194 { 00:07:11.194 "dma_device_id": "system", 00:07:11.194 "dma_device_type": 1 00:07:11.194 }, 00:07:11.194 { 00:07:11.194 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:11.194 "dma_device_type": 2 00:07:11.194 }, 00:07:11.194 { 00:07:11.194 "dma_device_id": "system", 00:07:11.194 "dma_device_type": 1 00:07:11.194 }, 00:07:11.194 { 00:07:11.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.194 "dma_device_type": 2 00:07:11.194 } 00:07:11.194 ], 00:07:11.194 "driver_specific": { 00:07:11.194 "raid": { 00:07:11.194 "uuid": "1f907c2d-3ade-4580-92e5-c8943051cb70", 00:07:11.194 "strip_size_kb": 64, 00:07:11.194 "state": "online", 00:07:11.194 "raid_level": "raid0", 00:07:11.194 "superblock": false, 00:07:11.194 "num_base_bdevs": 2, 00:07:11.194 "num_base_bdevs_discovered": 2, 00:07:11.194 "num_base_bdevs_operational": 2, 00:07:11.194 "base_bdevs_list": [ 00:07:11.194 { 00:07:11.194 "name": "BaseBdev1", 00:07:11.194 "uuid": "3d2e5ede-acca-4e35-9320-185f79899cac", 00:07:11.194 "is_configured": true, 00:07:11.194 "data_offset": 0, 00:07:11.194 "data_size": 65536 00:07:11.194 }, 00:07:11.194 { 00:07:11.194 "name": "BaseBdev2", 00:07:11.194 "uuid": "4bd3cbfe-98f3-4241-8efb-97c6420d749b", 00:07:11.194 "is_configured": true, 00:07:11.194 "data_offset": 0, 00:07:11.194 "data_size": 65536 00:07:11.194 } 00:07:11.194 ] 00:07:11.194 } 00:07:11.194 } 00:07:11.194 }' 00:07:11.194 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:11.194 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:11.194 BaseBdev2' 00:07:11.194 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:11.194 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:11.194 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:11.194 15:14:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:11.194 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:11.194 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.194 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.194 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.194 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:11.194 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:11.194 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:11.194 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:11.194 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.194 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.194 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:11.194 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.453 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:11.453 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:11.453 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:11.453 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.453 15:14:57 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:11.453 [2024-11-20 15:14:57.685819] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:11.453 [2024-11-20 15:14:57.685862] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:11.453 [2024-11-20 15:14:57.685930] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:11.453 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.453 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:11.453 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:11.453 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:11.453 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:11.453 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:11.453 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:11.453 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:11.453 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:11.453 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:11.453 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:11.453 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:11.453 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.453 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.453 15:14:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:11.453 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.453 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.453 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:11.453 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.453 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.453 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.453 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.453 "name": "Existed_Raid", 00:07:11.453 "uuid": "1f907c2d-3ade-4580-92e5-c8943051cb70", 00:07:11.453 "strip_size_kb": 64, 00:07:11.453 "state": "offline", 00:07:11.453 "raid_level": "raid0", 00:07:11.453 "superblock": false, 00:07:11.453 "num_base_bdevs": 2, 00:07:11.453 "num_base_bdevs_discovered": 1, 00:07:11.453 "num_base_bdevs_operational": 1, 00:07:11.453 "base_bdevs_list": [ 00:07:11.453 { 00:07:11.453 "name": null, 00:07:11.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.453 "is_configured": false, 00:07:11.453 "data_offset": 0, 00:07:11.453 "data_size": 65536 00:07:11.453 }, 00:07:11.453 { 00:07:11.453 "name": "BaseBdev2", 00:07:11.453 "uuid": "4bd3cbfe-98f3-4241-8efb-97c6420d749b", 00:07:11.453 "is_configured": true, 00:07:11.453 "data_offset": 0, 00:07:11.453 "data_size": 65536 00:07:11.453 } 00:07:11.453 ] 00:07:11.453 }' 00:07:11.453 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.453 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.021 [2024-11-20 15:14:58.261938] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:12.021 [2024-11-20 15:14:58.262002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.021 15:14:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60579 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60579 ']' 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 60579 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60579 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.021 killing process with pid 60579 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60579' 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60579 00:07:12.021 15:14:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 
-- # wait 60579 00:07:12.021 [2024-11-20 15:14:58.456972] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:12.021 [2024-11-20 15:14:58.475765] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:13.401 00:07:13.401 real 0m4.925s 00:07:13.401 user 0m6.963s 00:07:13.401 sys 0m0.833s 00:07:13.401 ************************************ 00:07:13.401 END TEST raid_state_function_test 00:07:13.401 ************************************ 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.401 15:14:59 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:13.401 15:14:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:13.401 15:14:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.401 15:14:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:13.401 ************************************ 00:07:13.401 START TEST raid_state_function_test_sb 00:07:13.401 ************************************ 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60827 00:07:13.401 Process raid pid: 60827 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60827' 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60827 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60827 ']' 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.401 15:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:13.659 [2024-11-20 15:14:59.891603] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:07:13.659 [2024-11-20 15:14:59.891769] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.659 [2024-11-20 15:15:00.074752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.917 [2024-11-20 15:15:00.207739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.175 [2024-11-20 15:15:00.425813] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.175 [2024-11-20 15:15:00.425863] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.433 15:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.433 15:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:14.433 15:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:14.433 15:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.433 15:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.433 [2024-11-20 15:15:00.792817] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:14.433 [2024-11-20 15:15:00.792873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:14.433 [2024-11-20 15:15:00.792885] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:14.433 [2024-11-20 15:15:00.792898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:14.433 15:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.433 
15:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:14.433 15:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:14.433 15:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:14.433 15:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:14.433 15:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.433 15:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:14.433 15:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.433 15:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.433 15:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.433 15:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.433 15:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:14.433 15:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.433 15:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.433 15:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.433 15:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.433 15:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.433 "name": "Existed_Raid", 00:07:14.433 "uuid": "1c7b78f3-b37e-41b2-94c8-63ba24855347", 00:07:14.433 "strip_size_kb": 
64, 00:07:14.433 "state": "configuring", 00:07:14.433 "raid_level": "raid0", 00:07:14.433 "superblock": true, 00:07:14.433 "num_base_bdevs": 2, 00:07:14.434 "num_base_bdevs_discovered": 0, 00:07:14.434 "num_base_bdevs_operational": 2, 00:07:14.434 "base_bdevs_list": [ 00:07:14.434 { 00:07:14.434 "name": "BaseBdev1", 00:07:14.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.434 "is_configured": false, 00:07:14.434 "data_offset": 0, 00:07:14.434 "data_size": 0 00:07:14.434 }, 00:07:14.434 { 00:07:14.434 "name": "BaseBdev2", 00:07:14.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.434 "is_configured": false, 00:07:14.434 "data_offset": 0, 00:07:14.434 "data_size": 0 00:07:14.434 } 00:07:14.434 ] 00:07:14.434 }' 00:07:14.434 15:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.434 15:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.000 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:15.000 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.000 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.000 [2024-11-20 15:15:01.216174] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:15.000 [2024-11-20 15:15:01.216217] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:15.000 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.000 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:15.000 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.000 15:15:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.000 [2024-11-20 15:15:01.224156] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:15.000 [2024-11-20 15:15:01.224206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:15.000 [2024-11-20 15:15:01.224216] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:15.000 [2024-11-20 15:15:01.224233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:15.000 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.000 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:15.000 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.000 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.000 [2024-11-20 15:15:01.272531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:15.000 BaseBdev1 00:07:15.000 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.000 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:15.000 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:15.000 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:15.000 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:15.000 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:15.000 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:07:15.000 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:15.000 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.000 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.000 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.000 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:15.000 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.000 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.000 [ 00:07:15.000 { 00:07:15.000 "name": "BaseBdev1", 00:07:15.000 "aliases": [ 00:07:15.000 "f78ea97f-9b78-46cb-8caa-fd36e6354bb5" 00:07:15.000 ], 00:07:15.000 "product_name": "Malloc disk", 00:07:15.000 "block_size": 512, 00:07:15.000 "num_blocks": 65536, 00:07:15.000 "uuid": "f78ea97f-9b78-46cb-8caa-fd36e6354bb5", 00:07:15.000 "assigned_rate_limits": { 00:07:15.000 "rw_ios_per_sec": 0, 00:07:15.000 "rw_mbytes_per_sec": 0, 00:07:15.000 "r_mbytes_per_sec": 0, 00:07:15.000 "w_mbytes_per_sec": 0 00:07:15.000 }, 00:07:15.000 "claimed": true, 00:07:15.000 "claim_type": "exclusive_write", 00:07:15.000 "zoned": false, 00:07:15.000 "supported_io_types": { 00:07:15.000 "read": true, 00:07:15.000 "write": true, 00:07:15.000 "unmap": true, 00:07:15.000 "flush": true, 00:07:15.000 "reset": true, 00:07:15.000 "nvme_admin": false, 00:07:15.000 "nvme_io": false, 00:07:15.000 "nvme_io_md": false, 00:07:15.000 "write_zeroes": true, 00:07:15.000 "zcopy": true, 00:07:15.000 "get_zone_info": false, 00:07:15.000 "zone_management": false, 00:07:15.000 "zone_append": false, 00:07:15.000 "compare": false, 00:07:15.000 "compare_and_write": false, 00:07:15.000 
"abort": true, 00:07:15.000 "seek_hole": false, 00:07:15.000 "seek_data": false, 00:07:15.000 "copy": true, 00:07:15.000 "nvme_iov_md": false 00:07:15.000 }, 00:07:15.000 "memory_domains": [ 00:07:15.000 { 00:07:15.000 "dma_device_id": "system", 00:07:15.000 "dma_device_type": 1 00:07:15.000 }, 00:07:15.000 { 00:07:15.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.000 "dma_device_type": 2 00:07:15.000 } 00:07:15.000 ], 00:07:15.000 "driver_specific": {} 00:07:15.000 } 00:07:15.000 ] 00:07:15.000 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.000 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:15.000 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:15.000 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:15.000 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:15.000 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:15.001 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.001 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:15.001 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.001 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.001 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.001 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.001 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:15.001 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.001 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.001 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.001 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.001 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.001 "name": "Existed_Raid", 00:07:15.001 "uuid": "a0d5d799-7f0e-435e-a945-c75398783e61", 00:07:15.001 "strip_size_kb": 64, 00:07:15.001 "state": "configuring", 00:07:15.001 "raid_level": "raid0", 00:07:15.001 "superblock": true, 00:07:15.001 "num_base_bdevs": 2, 00:07:15.001 "num_base_bdevs_discovered": 1, 00:07:15.001 "num_base_bdevs_operational": 2, 00:07:15.001 "base_bdevs_list": [ 00:07:15.001 { 00:07:15.001 "name": "BaseBdev1", 00:07:15.001 "uuid": "f78ea97f-9b78-46cb-8caa-fd36e6354bb5", 00:07:15.001 "is_configured": true, 00:07:15.001 "data_offset": 2048, 00:07:15.001 "data_size": 63488 00:07:15.001 }, 00:07:15.001 { 00:07:15.001 "name": "BaseBdev2", 00:07:15.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.001 "is_configured": false, 00:07:15.001 "data_offset": 0, 00:07:15.001 "data_size": 0 00:07:15.001 } 00:07:15.001 ] 00:07:15.001 }' 00:07:15.001 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.001 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.260 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:15.260 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.260 15:15:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:15.518 [2024-11-20 15:15:01.743936] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:15.518 [2024-11-20 15:15:01.743994] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:15.518 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.518 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:15.519 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.519 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.519 [2024-11-20 15:15:01.751974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:15.519 [2024-11-20 15:15:01.754247] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:15.519 [2024-11-20 15:15:01.754296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:15.519 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.519 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:15.519 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:15.519 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:15.519 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:15.519 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:15.519 15:15:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:15.519 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.519 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:15.519 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.519 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.519 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.519 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.519 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.519 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.519 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.519 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.519 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.519 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.519 "name": "Existed_Raid", 00:07:15.519 "uuid": "55c67c51-f3c5-49f1-91ab-97030cf00d71", 00:07:15.519 "strip_size_kb": 64, 00:07:15.519 "state": "configuring", 00:07:15.519 "raid_level": "raid0", 00:07:15.519 "superblock": true, 00:07:15.519 "num_base_bdevs": 2, 00:07:15.519 "num_base_bdevs_discovered": 1, 00:07:15.519 "num_base_bdevs_operational": 2, 00:07:15.519 "base_bdevs_list": [ 00:07:15.519 { 00:07:15.519 "name": "BaseBdev1", 00:07:15.519 "uuid": "f78ea97f-9b78-46cb-8caa-fd36e6354bb5", 00:07:15.519 "is_configured": true, 00:07:15.519 "data_offset": 2048, 
00:07:15.519 "data_size": 63488 00:07:15.519 }, 00:07:15.519 { 00:07:15.519 "name": "BaseBdev2", 00:07:15.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.519 "is_configured": false, 00:07:15.519 "data_offset": 0, 00:07:15.519 "data_size": 0 00:07:15.519 } 00:07:15.519 ] 00:07:15.519 }' 00:07:15.519 15:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.519 15:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.778 [2024-11-20 15:15:02.213194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:15.778 [2024-11-20 15:15:02.213478] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:15.778 [2024-11-20 15:15:02.213495] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:15.778 [2024-11-20 15:15:02.213837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:15.778 [2024-11-20 15:15:02.214075] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:15.778 [2024-11-20 15:15:02.214094] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:15.778 [2024-11-20 15:15:02.214240] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:15.778 BaseBdev2 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.778 [ 00:07:15.778 { 00:07:15.778 "name": "BaseBdev2", 00:07:15.778 "aliases": [ 00:07:15.778 "438f27cd-3673-433b-9ae0-38a7c85b82ec" 00:07:15.778 ], 00:07:15.778 "product_name": "Malloc disk", 00:07:15.778 "block_size": 512, 00:07:15.778 "num_blocks": 65536, 00:07:15.778 "uuid": "438f27cd-3673-433b-9ae0-38a7c85b82ec", 00:07:15.778 "assigned_rate_limits": { 00:07:15.778 "rw_ios_per_sec": 0, 00:07:15.778 "rw_mbytes_per_sec": 0, 00:07:15.778 "r_mbytes_per_sec": 0, 00:07:15.778 "w_mbytes_per_sec": 0 00:07:15.778 }, 00:07:15.778 "claimed": true, 00:07:15.778 "claim_type": 
"exclusive_write", 00:07:15.778 "zoned": false, 00:07:15.778 "supported_io_types": { 00:07:15.778 "read": true, 00:07:15.778 "write": true, 00:07:15.778 "unmap": true, 00:07:15.778 "flush": true, 00:07:15.778 "reset": true, 00:07:15.778 "nvme_admin": false, 00:07:15.778 "nvme_io": false, 00:07:15.778 "nvme_io_md": false, 00:07:15.778 "write_zeroes": true, 00:07:15.778 "zcopy": true, 00:07:15.778 "get_zone_info": false, 00:07:15.778 "zone_management": false, 00:07:15.778 "zone_append": false, 00:07:15.778 "compare": false, 00:07:15.778 "compare_and_write": false, 00:07:15.778 "abort": true, 00:07:15.778 "seek_hole": false, 00:07:15.778 "seek_data": false, 00:07:15.778 "copy": true, 00:07:15.778 "nvme_iov_md": false 00:07:15.778 }, 00:07:15.778 "memory_domains": [ 00:07:15.778 { 00:07:15.778 "dma_device_id": "system", 00:07:15.778 "dma_device_type": 1 00:07:15.778 }, 00:07:15.778 { 00:07:15.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.778 "dma_device_type": 2 00:07:15.778 } 00:07:15.778 ], 00:07:15.778 "driver_specific": {} 00:07:15.778 } 00:07:15.778 ] 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.778 15:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.037 15:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.037 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.037 "name": "Existed_Raid", 00:07:16.037 "uuid": "55c67c51-f3c5-49f1-91ab-97030cf00d71", 00:07:16.037 "strip_size_kb": 64, 00:07:16.037 "state": "online", 00:07:16.037 "raid_level": "raid0", 00:07:16.037 "superblock": true, 00:07:16.037 "num_base_bdevs": 2, 00:07:16.037 "num_base_bdevs_discovered": 2, 00:07:16.037 "num_base_bdevs_operational": 2, 00:07:16.037 "base_bdevs_list": [ 00:07:16.037 { 00:07:16.037 "name": "BaseBdev1", 00:07:16.037 "uuid": "f78ea97f-9b78-46cb-8caa-fd36e6354bb5", 00:07:16.037 "is_configured": true, 00:07:16.037 "data_offset": 2048, 00:07:16.037 "data_size": 63488 
00:07:16.037 }, 00:07:16.037 { 00:07:16.037 "name": "BaseBdev2", 00:07:16.037 "uuid": "438f27cd-3673-433b-9ae0-38a7c85b82ec", 00:07:16.037 "is_configured": true, 00:07:16.037 "data_offset": 2048, 00:07:16.037 "data_size": 63488 00:07:16.037 } 00:07:16.037 ] 00:07:16.037 }' 00:07:16.037 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.037 15:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.296 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:16.296 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:16.296 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:16.296 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:16.296 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:16.296 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:16.296 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:16.296 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:16.296 15:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.296 15:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.296 [2024-11-20 15:15:02.705080] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:16.296 15:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.296 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:16.296 "name": 
"Existed_Raid", 00:07:16.296 "aliases": [ 00:07:16.296 "55c67c51-f3c5-49f1-91ab-97030cf00d71" 00:07:16.296 ], 00:07:16.296 "product_name": "Raid Volume", 00:07:16.296 "block_size": 512, 00:07:16.296 "num_blocks": 126976, 00:07:16.296 "uuid": "55c67c51-f3c5-49f1-91ab-97030cf00d71", 00:07:16.296 "assigned_rate_limits": { 00:07:16.296 "rw_ios_per_sec": 0, 00:07:16.296 "rw_mbytes_per_sec": 0, 00:07:16.296 "r_mbytes_per_sec": 0, 00:07:16.296 "w_mbytes_per_sec": 0 00:07:16.296 }, 00:07:16.296 "claimed": false, 00:07:16.296 "zoned": false, 00:07:16.296 "supported_io_types": { 00:07:16.296 "read": true, 00:07:16.296 "write": true, 00:07:16.296 "unmap": true, 00:07:16.296 "flush": true, 00:07:16.296 "reset": true, 00:07:16.296 "nvme_admin": false, 00:07:16.296 "nvme_io": false, 00:07:16.296 "nvme_io_md": false, 00:07:16.296 "write_zeroes": true, 00:07:16.296 "zcopy": false, 00:07:16.296 "get_zone_info": false, 00:07:16.296 "zone_management": false, 00:07:16.296 "zone_append": false, 00:07:16.296 "compare": false, 00:07:16.296 "compare_and_write": false, 00:07:16.296 "abort": false, 00:07:16.296 "seek_hole": false, 00:07:16.296 "seek_data": false, 00:07:16.296 "copy": false, 00:07:16.296 "nvme_iov_md": false 00:07:16.296 }, 00:07:16.296 "memory_domains": [ 00:07:16.296 { 00:07:16.296 "dma_device_id": "system", 00:07:16.296 "dma_device_type": 1 00:07:16.296 }, 00:07:16.296 { 00:07:16.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.296 "dma_device_type": 2 00:07:16.296 }, 00:07:16.296 { 00:07:16.296 "dma_device_id": "system", 00:07:16.296 "dma_device_type": 1 00:07:16.296 }, 00:07:16.296 { 00:07:16.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.296 "dma_device_type": 2 00:07:16.296 } 00:07:16.296 ], 00:07:16.296 "driver_specific": { 00:07:16.296 "raid": { 00:07:16.296 "uuid": "55c67c51-f3c5-49f1-91ab-97030cf00d71", 00:07:16.296 "strip_size_kb": 64, 00:07:16.296 "state": "online", 00:07:16.296 "raid_level": "raid0", 00:07:16.296 "superblock": true, 00:07:16.296 
"num_base_bdevs": 2, 00:07:16.296 "num_base_bdevs_discovered": 2, 00:07:16.296 "num_base_bdevs_operational": 2, 00:07:16.296 "base_bdevs_list": [ 00:07:16.296 { 00:07:16.296 "name": "BaseBdev1", 00:07:16.296 "uuid": "f78ea97f-9b78-46cb-8caa-fd36e6354bb5", 00:07:16.296 "is_configured": true, 00:07:16.296 "data_offset": 2048, 00:07:16.296 "data_size": 63488 00:07:16.296 }, 00:07:16.296 { 00:07:16.296 "name": "BaseBdev2", 00:07:16.296 "uuid": "438f27cd-3673-433b-9ae0-38a7c85b82ec", 00:07:16.296 "is_configured": true, 00:07:16.296 "data_offset": 2048, 00:07:16.296 "data_size": 63488 00:07:16.296 } 00:07:16.296 ] 00:07:16.296 } 00:07:16.296 } 00:07:16.296 }' 00:07:16.296 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:16.554 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:16.554 BaseBdev2' 00:07:16.554 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.554 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:16.554 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:16.554 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:16.555 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.555 15:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.555 15:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.555 15:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:16.555 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:16.555 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:16.555 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:16.555 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:16.555 15:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.555 15:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.555 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.555 15:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.555 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:16.555 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:16.555 15:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:16.555 15:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.555 15:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.555 [2024-11-20 15:15:02.932859] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:16.555 [2024-11-20 15:15:02.932906] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:16.555 [2024-11-20 15:15:02.932968] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:16.555 15:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:07:16.555 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:16.555 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:16.555 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:16.555 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:16.555 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:16.555 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:16.555 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:16.555 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:16.555 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:16.555 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.555 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:16.555 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.555 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.555 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.555 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.814 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.814 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:16.814 15:15:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.814 15:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.814 15:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.814 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.814 "name": "Existed_Raid", 00:07:16.814 "uuid": "55c67c51-f3c5-49f1-91ab-97030cf00d71", 00:07:16.814 "strip_size_kb": 64, 00:07:16.814 "state": "offline", 00:07:16.814 "raid_level": "raid0", 00:07:16.814 "superblock": true, 00:07:16.814 "num_base_bdevs": 2, 00:07:16.814 "num_base_bdevs_discovered": 1, 00:07:16.814 "num_base_bdevs_operational": 1, 00:07:16.814 "base_bdevs_list": [ 00:07:16.814 { 00:07:16.814 "name": null, 00:07:16.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.814 "is_configured": false, 00:07:16.814 "data_offset": 0, 00:07:16.814 "data_size": 63488 00:07:16.814 }, 00:07:16.814 { 00:07:16.814 "name": "BaseBdev2", 00:07:16.814 "uuid": "438f27cd-3673-433b-9ae0-38a7c85b82ec", 00:07:16.814 "is_configured": true, 00:07:16.814 "data_offset": 2048, 00:07:16.814 "data_size": 63488 00:07:16.814 } 00:07:16.814 ] 00:07:16.814 }' 00:07:16.814 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.814 15:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.072 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:17.072 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:17.072 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.072 15:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.072 15:15:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.072 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:17.072 15:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.072 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:17.072 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:17.072 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:17.072 15:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.072 15:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.072 [2024-11-20 15:15:03.517759] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:17.072 [2024-11-20 15:15:03.517818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:17.331 15:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.331 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:17.331 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:17.331 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.331 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:17.331 15:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.331 15:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.331 15:15:03 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.331 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:17.331 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:17.331 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:17.331 15:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60827 00:07:17.331 15:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60827 ']' 00:07:17.331 15:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60827 00:07:17.331 15:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:17.331 15:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.331 15:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60827 00:07:17.331 15:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.331 15:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.331 killing process with pid 60827 00:07:17.331 15:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60827' 00:07:17.331 15:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60827 00:07:17.331 [2024-11-20 15:15:03.715704] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:17.331 15:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60827 00:07:17.331 [2024-11-20 15:15:03.733371] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:18.707 15:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 
0 00:07:18.707 00:07:18.707 real 0m5.115s 00:07:18.707 user 0m7.370s 00:07:18.707 sys 0m0.858s 00:07:18.707 15:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.707 15:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.707 ************************************ 00:07:18.707 END TEST raid_state_function_test_sb 00:07:18.707 ************************************ 00:07:18.707 15:15:04 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:18.707 15:15:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:18.707 15:15:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.707 15:15:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:18.707 ************************************ 00:07:18.707 START TEST raid_superblock_test 00:07:18.707 ************************************ 00:07:18.707 15:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:18.707 15:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:18.707 15:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:18.707 15:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:18.707 15:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:18.707 15:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:18.707 15:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:18.707 15:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:18.707 15:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:18.707 15:15:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:18.707 15:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:18.707 15:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:18.707 15:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:18.707 15:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:18.707 15:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:18.707 15:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:18.707 15:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:18.707 15:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61079 00:07:18.707 15:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61079 00:07:18.707 15:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61079 ']' 00:07:18.707 15:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.707 15:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.707 15:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:18.707 15:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.707 15:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.707 15:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:18.707 [2024-11-20 15:15:05.061366] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:07:18.707 [2024-11-20 15:15:05.061501] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61079 ] 00:07:18.966 [2024-11-20 15:15:05.244297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.966 [2024-11-20 15:15:05.359123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.224 [2024-11-20 15:15:05.566688] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.224 [2024-11-20 15:15:05.566737] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.484 15:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.484 15:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:19.484 15:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:19.484 15:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:19.484 15:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:19.484 15:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:19.484 15:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:19.484 
15:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:19.484 15:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:19.484 15:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:19.484 15:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:19.484 15:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.484 15:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.484 malloc1 00:07:19.484 15:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.484 15:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:19.484 15:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.484 15:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.484 [2024-11-20 15:15:05.939631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:19.484 [2024-11-20 15:15:05.939716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:19.484 [2024-11-20 15:15:05.939742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:19.484 [2024-11-20 15:15:05.939756] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:19.484 [2024-11-20 15:15:05.942291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:19.484 [2024-11-20 15:15:05.942332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:19.484 pt1 00:07:19.484 15:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:19.484 15:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:19.484 15:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:19.484 15:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:19.484 15:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:19.484 15:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:19.484 15:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:19.484 15:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:19.484 15:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:19.484 15:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:19.484 15:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.484 15:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.744 malloc2 00:07:19.744 15:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.744 15:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:19.744 15:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.744 15:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.744 [2024-11-20 15:15:05.993462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:19.744 [2024-11-20 15:15:05.993529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:19.744 [2024-11-20 
15:15:05.993561] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:19.744 [2024-11-20 15:15:05.993574] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:19.744 [2024-11-20 15:15:05.996085] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:19.744 [2024-11-20 15:15:05.996147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:19.744 pt2 00:07:19.744 15:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.744 15:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:19.744 15:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:19.744 15:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:19.744 15:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.744 15:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.744 [2024-11-20 15:15:06.005518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:19.744 [2024-11-20 15:15:06.007690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:19.744 [2024-11-20 15:15:06.007855] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:19.744 [2024-11-20 15:15:06.007869] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:19.744 [2024-11-20 15:15:06.008155] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:19.744 [2024-11-20 15:15:06.008311] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:19.744 [2024-11-20 15:15:06.008337] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev 
is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:19.744 [2024-11-20 15:15:06.008512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:19.744 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.744 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:19.744 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:19.744 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:19.744 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:19.744 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.744 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:19.744 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.744 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.744 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.744 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.744 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.744 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:19.744 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.744 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.744 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.744 15:15:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.744 "name": "raid_bdev1", 00:07:19.744 "uuid": "5b0d128f-be89-43d4-a843-21acf4ad880c", 00:07:19.744 "strip_size_kb": 64, 00:07:19.744 "state": "online", 00:07:19.744 "raid_level": "raid0", 00:07:19.744 "superblock": true, 00:07:19.744 "num_base_bdevs": 2, 00:07:19.744 "num_base_bdevs_discovered": 2, 00:07:19.744 "num_base_bdevs_operational": 2, 00:07:19.744 "base_bdevs_list": [ 00:07:19.744 { 00:07:19.744 "name": "pt1", 00:07:19.744 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:19.744 "is_configured": true, 00:07:19.744 "data_offset": 2048, 00:07:19.744 "data_size": 63488 00:07:19.744 }, 00:07:19.744 { 00:07:19.744 "name": "pt2", 00:07:19.744 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:19.744 "is_configured": true, 00:07:19.744 "data_offset": 2048, 00:07:19.744 "data_size": 63488 00:07:19.744 } 00:07:19.744 ] 00:07:19.744 }' 00:07:19.744 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.744 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.033 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:20.033 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:20.033 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:20.033 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:20.033 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:20.033 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:20.033 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:20.033 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.033 
15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:20.033 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.033 [2024-11-20 15:15:06.453142] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:20.033 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.033 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:20.033 "name": "raid_bdev1", 00:07:20.033 "aliases": [ 00:07:20.033 "5b0d128f-be89-43d4-a843-21acf4ad880c" 00:07:20.033 ], 00:07:20.033 "product_name": "Raid Volume", 00:07:20.033 "block_size": 512, 00:07:20.033 "num_blocks": 126976, 00:07:20.033 "uuid": "5b0d128f-be89-43d4-a843-21acf4ad880c", 00:07:20.033 "assigned_rate_limits": { 00:07:20.033 "rw_ios_per_sec": 0, 00:07:20.033 "rw_mbytes_per_sec": 0, 00:07:20.033 "r_mbytes_per_sec": 0, 00:07:20.033 "w_mbytes_per_sec": 0 00:07:20.033 }, 00:07:20.033 "claimed": false, 00:07:20.033 "zoned": false, 00:07:20.033 "supported_io_types": { 00:07:20.033 "read": true, 00:07:20.033 "write": true, 00:07:20.033 "unmap": true, 00:07:20.033 "flush": true, 00:07:20.033 "reset": true, 00:07:20.033 "nvme_admin": false, 00:07:20.033 "nvme_io": false, 00:07:20.033 "nvme_io_md": false, 00:07:20.033 "write_zeroes": true, 00:07:20.033 "zcopy": false, 00:07:20.033 "get_zone_info": false, 00:07:20.033 "zone_management": false, 00:07:20.033 "zone_append": false, 00:07:20.033 "compare": false, 00:07:20.033 "compare_and_write": false, 00:07:20.033 "abort": false, 00:07:20.033 "seek_hole": false, 00:07:20.033 "seek_data": false, 00:07:20.033 "copy": false, 00:07:20.033 "nvme_iov_md": false 00:07:20.033 }, 00:07:20.033 "memory_domains": [ 00:07:20.033 { 00:07:20.033 "dma_device_id": "system", 00:07:20.033 "dma_device_type": 1 00:07:20.033 }, 00:07:20.033 { 00:07:20.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.033 "dma_device_type": 2 
00:07:20.033 }, 00:07:20.033 { 00:07:20.033 "dma_device_id": "system", 00:07:20.033 "dma_device_type": 1 00:07:20.033 }, 00:07:20.033 { 00:07:20.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.033 "dma_device_type": 2 00:07:20.033 } 00:07:20.033 ], 00:07:20.033 "driver_specific": { 00:07:20.033 "raid": { 00:07:20.033 "uuid": "5b0d128f-be89-43d4-a843-21acf4ad880c", 00:07:20.033 "strip_size_kb": 64, 00:07:20.033 "state": "online", 00:07:20.033 "raid_level": "raid0", 00:07:20.033 "superblock": true, 00:07:20.033 "num_base_bdevs": 2, 00:07:20.033 "num_base_bdevs_discovered": 2, 00:07:20.033 "num_base_bdevs_operational": 2, 00:07:20.033 "base_bdevs_list": [ 00:07:20.033 { 00:07:20.033 "name": "pt1", 00:07:20.033 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:20.033 "is_configured": true, 00:07:20.033 "data_offset": 2048, 00:07:20.033 "data_size": 63488 00:07:20.033 }, 00:07:20.033 { 00:07:20.033 "name": "pt2", 00:07:20.033 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:20.033 "is_configured": true, 00:07:20.033 "data_offset": 2048, 00:07:20.033 "data_size": 63488 00:07:20.033 } 00:07:20.033 ] 00:07:20.033 } 00:07:20.033 } 00:07:20.033 }' 00:07:20.034 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:20.326 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:20.326 pt2' 00:07:20.326 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:20.326 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:20.326 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:20.326 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:20.326 15:15:06 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.326 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:20.326 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.326 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.326 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:20.326 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:20.326 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:20.326 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:20.326 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:20.326 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.327 [2024-11-20 15:15:06.684838] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5b0d128f-be89-43d4-a843-21acf4ad880c 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5b0d128f-be89-43d4-a843-21acf4ad880c ']' 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.327 [2024-11-20 15:15:06.728455] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:20.327 [2024-11-20 15:15:06.728489] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:20.327 [2024-11-20 15:15:06.728583] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:20.327 [2024-11-20 15:15:06.728633] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:20.327 [2024-11-20 15:15:06.728649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:20.327 15:15:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:20.327 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == 
true ']' 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.587 [2024-11-20 15:15:06.856314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:20.587 [2024-11-20 15:15:06.858613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:20.587 [2024-11-20 15:15:06.858702] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:20.587 [2024-11-20 15:15:06.858758] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:20.587 [2024-11-20 15:15:06.858777] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
00:07:20.587 [2024-11-20 15:15:06.858793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:20.587 request: 00:07:20.587 { 00:07:20.587 "name": "raid_bdev1", 00:07:20.587 "raid_level": "raid0", 00:07:20.587 "base_bdevs": [ 00:07:20.587 "malloc1", 00:07:20.587 "malloc2" 00:07:20.587 ], 00:07:20.587 "strip_size_kb": 64, 00:07:20.587 "superblock": false, 00:07:20.587 "method": "bdev_raid_create", 00:07:20.587 "req_id": 1 00:07:20.587 } 00:07:20.587 Got JSON-RPC error response 00:07:20.587 response: 00:07:20.587 { 00:07:20.587 "code": -17, 00:07:20.587 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:20.587 } 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.587 [2024-11-20 15:15:06.920214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:20.587 [2024-11-20 15:15:06.920292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.587 [2024-11-20 15:15:06.920314] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:20.587 [2024-11-20 15:15:06.920329] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.587 [2024-11-20 15:15:06.922981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.587 [2024-11-20 15:15:06.923025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:20.587 [2024-11-20 15:15:06.923121] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:20.587 [2024-11-20 15:15:06.923179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:20.587 pt1 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:20.587 
15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.587 "name": "raid_bdev1", 00:07:20.587 "uuid": "5b0d128f-be89-43d4-a843-21acf4ad880c", 00:07:20.587 "strip_size_kb": 64, 00:07:20.587 "state": "configuring", 00:07:20.587 "raid_level": "raid0", 00:07:20.587 "superblock": true, 00:07:20.587 "num_base_bdevs": 2, 00:07:20.587 "num_base_bdevs_discovered": 1, 00:07:20.587 "num_base_bdevs_operational": 2, 00:07:20.587 "base_bdevs_list": [ 00:07:20.587 { 00:07:20.587 "name": "pt1", 00:07:20.587 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:20.587 "is_configured": true, 00:07:20.587 "data_offset": 2048, 00:07:20.587 "data_size": 63488 00:07:20.587 }, 00:07:20.587 { 00:07:20.587 "name": null, 00:07:20.587 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:20.587 "is_configured": false, 00:07:20.587 "data_offset": 2048, 00:07:20.587 "data_size": 63488 
00:07:20.587 } 00:07:20.587 ] 00:07:20.587 }' 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.587 15:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.155 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:21.155 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:21.155 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:21.155 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:21.155 15:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.155 15:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.155 [2024-11-20 15:15:07.395545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:21.155 [2024-11-20 15:15:07.395622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:21.155 [2024-11-20 15:15:07.395648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:21.155 [2024-11-20 15:15:07.395676] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:21.155 [2024-11-20 15:15:07.396174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:21.155 [2024-11-20 15:15:07.396205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:21.155 [2024-11-20 15:15:07.396291] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:21.155 [2024-11-20 15:15:07.396321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:21.155 [2024-11-20 15:15:07.396452] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:07:21.155 [2024-11-20 15:15:07.396467] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:21.155 [2024-11-20 15:15:07.396753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:21.155 [2024-11-20 15:15:07.396904] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:21.155 [2024-11-20 15:15:07.396914] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:21.155 [2024-11-20 15:15:07.397063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:21.155 pt2 00:07:21.155 15:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.155 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:21.155 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:21.155 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:21.155 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:21.155 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:21.155 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:21.156 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.156 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:21.156 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.156 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.156 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:07:21.156 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.156 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.156 15:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.156 15:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.156 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:21.156 15:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.156 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.156 "name": "raid_bdev1", 00:07:21.156 "uuid": "5b0d128f-be89-43d4-a843-21acf4ad880c", 00:07:21.156 "strip_size_kb": 64, 00:07:21.156 "state": "online", 00:07:21.156 "raid_level": "raid0", 00:07:21.156 "superblock": true, 00:07:21.156 "num_base_bdevs": 2, 00:07:21.156 "num_base_bdevs_discovered": 2, 00:07:21.156 "num_base_bdevs_operational": 2, 00:07:21.156 "base_bdevs_list": [ 00:07:21.156 { 00:07:21.156 "name": "pt1", 00:07:21.156 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:21.156 "is_configured": true, 00:07:21.156 "data_offset": 2048, 00:07:21.156 "data_size": 63488 00:07:21.156 }, 00:07:21.156 { 00:07:21.156 "name": "pt2", 00:07:21.156 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:21.156 "is_configured": true, 00:07:21.156 "data_offset": 2048, 00:07:21.156 "data_size": 63488 00:07:21.156 } 00:07:21.156 ] 00:07:21.156 }' 00:07:21.156 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.156 15:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.414 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:21.414 15:15:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:21.414 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:21.414 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:21.414 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:21.414 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:21.414 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:21.415 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:21.415 15:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.415 15:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.415 [2024-11-20 15:15:07.779898] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:21.415 15:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.415 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:21.415 "name": "raid_bdev1", 00:07:21.415 "aliases": [ 00:07:21.415 "5b0d128f-be89-43d4-a843-21acf4ad880c" 00:07:21.415 ], 00:07:21.415 "product_name": "Raid Volume", 00:07:21.415 "block_size": 512, 00:07:21.415 "num_blocks": 126976, 00:07:21.415 "uuid": "5b0d128f-be89-43d4-a843-21acf4ad880c", 00:07:21.415 "assigned_rate_limits": { 00:07:21.415 "rw_ios_per_sec": 0, 00:07:21.415 "rw_mbytes_per_sec": 0, 00:07:21.415 "r_mbytes_per_sec": 0, 00:07:21.415 "w_mbytes_per_sec": 0 00:07:21.415 }, 00:07:21.415 "claimed": false, 00:07:21.415 "zoned": false, 00:07:21.415 "supported_io_types": { 00:07:21.415 "read": true, 00:07:21.415 "write": true, 00:07:21.415 "unmap": true, 00:07:21.415 "flush": true, 00:07:21.415 "reset": true, 00:07:21.415 "nvme_admin": false, 
00:07:21.415 "nvme_io": false, 00:07:21.415 "nvme_io_md": false, 00:07:21.415 "write_zeroes": true, 00:07:21.415 "zcopy": false, 00:07:21.415 "get_zone_info": false, 00:07:21.415 "zone_management": false, 00:07:21.415 "zone_append": false, 00:07:21.415 "compare": false, 00:07:21.415 "compare_and_write": false, 00:07:21.415 "abort": false, 00:07:21.415 "seek_hole": false, 00:07:21.415 "seek_data": false, 00:07:21.415 "copy": false, 00:07:21.415 "nvme_iov_md": false 00:07:21.415 }, 00:07:21.415 "memory_domains": [ 00:07:21.415 { 00:07:21.415 "dma_device_id": "system", 00:07:21.415 "dma_device_type": 1 00:07:21.415 }, 00:07:21.415 { 00:07:21.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.415 "dma_device_type": 2 00:07:21.415 }, 00:07:21.415 { 00:07:21.415 "dma_device_id": "system", 00:07:21.415 "dma_device_type": 1 00:07:21.415 }, 00:07:21.415 { 00:07:21.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.415 "dma_device_type": 2 00:07:21.415 } 00:07:21.415 ], 00:07:21.415 "driver_specific": { 00:07:21.415 "raid": { 00:07:21.415 "uuid": "5b0d128f-be89-43d4-a843-21acf4ad880c", 00:07:21.415 "strip_size_kb": 64, 00:07:21.415 "state": "online", 00:07:21.415 "raid_level": "raid0", 00:07:21.415 "superblock": true, 00:07:21.415 "num_base_bdevs": 2, 00:07:21.415 "num_base_bdevs_discovered": 2, 00:07:21.415 "num_base_bdevs_operational": 2, 00:07:21.415 "base_bdevs_list": [ 00:07:21.415 { 00:07:21.415 "name": "pt1", 00:07:21.415 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:21.415 "is_configured": true, 00:07:21.415 "data_offset": 2048, 00:07:21.415 "data_size": 63488 00:07:21.415 }, 00:07:21.415 { 00:07:21.415 "name": "pt2", 00:07:21.415 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:21.415 "is_configured": true, 00:07:21.415 "data_offset": 2048, 00:07:21.415 "data_size": 63488 00:07:21.415 } 00:07:21.415 ] 00:07:21.415 } 00:07:21.415 } 00:07:21.415 }' 00:07:21.415 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:21.415 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:21.415 pt2' 00:07:21.415 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.415 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:21.415 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:21.415 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.415 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:21.415 15:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.415 15:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.674 15:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.675 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:21.675 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:21.675 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:21.675 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:21.675 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.675 15:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.675 15:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.675 15:15:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.675 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:21.675 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:21.675 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:21.675 15:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:21.675 15:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.675 15:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.675 [2024-11-20 15:15:07.999871] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:21.675 15:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.675 15:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5b0d128f-be89-43d4-a843-21acf4ad880c '!=' 5b0d128f-be89-43d4-a843-21acf4ad880c ']' 00:07:21.675 15:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:21.675 15:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:21.675 15:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:21.675 15:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61079 00:07:21.675 15:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61079 ']' 00:07:21.675 15:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61079 00:07:21.675 15:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:21.675 15:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.675 15:15:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61079 00:07:21.675 15:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.675 15:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.675 killing process with pid 61079 00:07:21.675 15:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61079' 00:07:21.675 15:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61079 00:07:21.675 15:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61079 00:07:21.675 [2024-11-20 15:15:08.058589] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:21.675 [2024-11-20 15:15:08.058748] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:21.675 [2024-11-20 15:15:08.058820] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:21.675 [2024-11-20 15:15:08.058845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:21.934 [2024-11-20 15:15:08.277071] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:23.319 15:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:23.319 00:07:23.319 real 0m4.525s 00:07:23.319 user 0m6.318s 00:07:23.319 sys 0m0.768s 00:07:23.319 15:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.319 15:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.319 ************************************ 00:07:23.319 END TEST raid_superblock_test 00:07:23.319 ************************************ 00:07:23.319 15:15:09 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:23.319 15:15:09 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:23.319 15:15:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.319 15:15:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:23.319 ************************************ 00:07:23.319 START TEST raid_read_error_test 00:07:23.319 ************************************ 00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@795 -- # local strip_size 00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6COheIwcig 00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61291 00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61291 00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61291 ']' 00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.319 15:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.319 [2024-11-20 15:15:09.632393] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:07:23.319 [2024-11-20 15:15:09.632759] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61291 ] 00:07:23.577 [2024-11-20 15:15:09.802042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.577 [2024-11-20 15:15:09.982268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.834 [2024-11-20 15:15:10.232585] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.834 [2024-11-20 15:15:10.232647] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.093 15:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.093 15:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:24.093 15:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:24.093 15:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:24.093 15:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.093 15:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.352 BaseBdev1_malloc 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.353 true 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.353 [2024-11-20 15:15:10.602110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:24.353 [2024-11-20 15:15:10.602170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:24.353 [2024-11-20 15:15:10.602194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:24.353 [2024-11-20 15:15:10.602208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:24.353 [2024-11-20 15:15:10.604591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:24.353 [2024-11-20 15:15:10.604642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:24.353 BaseBdev1 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:24.353 BaseBdev2_malloc 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.353 true 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.353 [2024-11-20 15:15:10.671517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:24.353 [2024-11-20 15:15:10.671573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:24.353 [2024-11-20 15:15:10.671592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:24.353 [2024-11-20 15:15:10.671606] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:24.353 [2024-11-20 15:15:10.673950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:24.353 [2024-11-20 15:15:10.673992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:24.353 BaseBdev2 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:24.353 15:15:10 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.353 [2024-11-20 15:15:10.683563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:24.353 [2024-11-20 15:15:10.685637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:24.353 [2024-11-20 15:15:10.685832] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:24.353 [2024-11-20 15:15:10.685852] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:24.353 [2024-11-20 15:15:10.686097] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:24.353 [2024-11-20 15:15:10.686260] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:24.353 [2024-11-20 15:15:10.686282] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:24.353 [2024-11-20 15:15:10.686427] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.353 "name": "raid_bdev1", 00:07:24.353 "uuid": "ebee997f-cd3d-4396-99d9-5bc414239891", 00:07:24.353 "strip_size_kb": 64, 00:07:24.353 "state": "online", 00:07:24.353 "raid_level": "raid0", 00:07:24.353 "superblock": true, 00:07:24.353 "num_base_bdevs": 2, 00:07:24.353 "num_base_bdevs_discovered": 2, 00:07:24.353 "num_base_bdevs_operational": 2, 00:07:24.353 "base_bdevs_list": [ 00:07:24.353 { 00:07:24.353 "name": "BaseBdev1", 00:07:24.353 "uuid": "30e6910a-b7d6-5b83-a772-1cd946968c95", 00:07:24.353 "is_configured": true, 00:07:24.353 "data_offset": 2048, 00:07:24.353 "data_size": 63488 00:07:24.353 }, 00:07:24.353 { 00:07:24.353 "name": "BaseBdev2", 00:07:24.353 "uuid": "151f84db-b447-5809-8697-1d78c915e95d", 00:07:24.353 "is_configured": true, 00:07:24.353 "data_offset": 2048, 00:07:24.353 "data_size": 63488 00:07:24.353 } 00:07:24.353 ] 00:07:24.353 }' 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.353 15:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.920 15:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:24.920 15:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:24.920 [2024-11-20 15:15:11.208903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:25.856 15:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:25.856 15:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.856 15:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.856 15:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.856 15:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:25.856 15:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:25.856 15:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:25.856 15:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:25.856 15:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:25.856 15:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:25.856 15:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:25.856 15:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.856 15:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:25.856 15:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.856 15:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.856 15:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.856 15:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.856 15:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.856 15:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:25.856 15:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.856 15:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.856 15:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.856 15:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.856 "name": "raid_bdev1", 00:07:25.856 "uuid": "ebee997f-cd3d-4396-99d9-5bc414239891", 00:07:25.856 "strip_size_kb": 64, 00:07:25.856 "state": "online", 00:07:25.856 "raid_level": "raid0", 00:07:25.856 "superblock": true, 00:07:25.856 "num_base_bdevs": 2, 00:07:25.856 "num_base_bdevs_discovered": 2, 00:07:25.856 "num_base_bdevs_operational": 2, 00:07:25.856 "base_bdevs_list": [ 00:07:25.856 { 00:07:25.856 "name": "BaseBdev1", 00:07:25.856 "uuid": "30e6910a-b7d6-5b83-a772-1cd946968c95", 00:07:25.856 "is_configured": true, 00:07:25.856 "data_offset": 2048, 00:07:25.856 "data_size": 63488 00:07:25.856 }, 00:07:25.856 { 00:07:25.856 "name": "BaseBdev2", 00:07:25.856 "uuid": "151f84db-b447-5809-8697-1d78c915e95d", 00:07:25.856 "is_configured": true, 00:07:25.856 "data_offset": 2048, 00:07:25.856 "data_size": 63488 00:07:25.856 } 00:07:25.856 ] 00:07:25.856 }' 00:07:25.856 15:15:12 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.856 15:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.115 15:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:26.115 15:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.115 15:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.115 [2024-11-20 15:15:12.505237] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:26.115 [2024-11-20 15:15:12.505281] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:26.115 [2024-11-20 15:15:12.507957] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:26.115 [2024-11-20 15:15:12.508007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:26.115 [2024-11-20 15:15:12.508041] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:26.115 [2024-11-20 15:15:12.508054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:26.115 { 00:07:26.115 "results": [ 00:07:26.115 { 00:07:26.115 "job": "raid_bdev1", 00:07:26.115 "core_mask": "0x1", 00:07:26.115 "workload": "randrw", 00:07:26.115 "percentage": 50, 00:07:26.115 "status": "finished", 00:07:26.115 "queue_depth": 1, 00:07:26.115 "io_size": 131072, 00:07:26.115 "runtime": 1.29646, 00:07:26.115 "iops": 16056.029495703686, 00:07:26.115 "mibps": 2007.0036869629607, 00:07:26.115 "io_failed": 1, 00:07:26.115 "io_timeout": 0, 00:07:26.115 "avg_latency_us": 85.84353234622691, 00:07:26.115 "min_latency_us": 26.730923694779115, 00:07:26.115 "max_latency_us": 1644.9799196787149 00:07:26.115 } 00:07:26.115 ], 00:07:26.115 "core_count": 1 00:07:26.115 } 00:07:26.115 15:15:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.115 15:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61291 00:07:26.115 15:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61291 ']' 00:07:26.115 15:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61291 00:07:26.115 15:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:26.115 15:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.115 15:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61291 00:07:26.115 15:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:26.115 15:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:26.115 killing process with pid 61291 00:07:26.115 15:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61291' 00:07:26.115 15:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61291 00:07:26.115 [2024-11-20 15:15:12.548539] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:26.115 15:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61291 00:07:26.374 [2024-11-20 15:15:12.682615] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:27.749 15:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:27.749 15:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6COheIwcig 00:07:27.749 15:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:27.749 15:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.77 00:07:27.749 15:15:13 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:27.749 15:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:27.749 15:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:27.749 15:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.77 != \0\.\0\0 ]] 00:07:27.749 00:07:27.749 real 0m4.358s 00:07:27.749 user 0m5.184s 00:07:27.749 sys 0m0.595s 00:07:27.749 15:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.749 15:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.749 ************************************ 00:07:27.749 END TEST raid_read_error_test 00:07:27.749 ************************************ 00:07:27.749 15:15:13 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:27.750 15:15:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:27.750 15:15:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.750 15:15:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:27.750 ************************************ 00:07:27.750 START TEST raid_write_error_test 00:07:27.750 ************************************ 00:07:27.750 15:15:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:27.750 15:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:27.750 15:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:27.750 15:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:27.750 15:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:27.750 15:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:27.750 15:15:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:27.750 15:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:27.750 15:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:27.750 15:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:27.750 15:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:27.750 15:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:27.750 15:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:27.750 15:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:27.750 15:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:27.750 15:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:27.750 15:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:27.750 15:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:27.750 15:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:27.750 15:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:27.750 15:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:27.750 15:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:27.750 15:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:27.750 15:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LHfrL3rNHb 00:07:27.750 15:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61436 00:07:27.750 15:15:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61436 00:07:27.750 15:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:27.750 15:15:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61436 ']' 00:07:27.750 15:15:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.750 15:15:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.750 15:15:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.750 15:15:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.750 15:15:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.750 [2024-11-20 15:15:14.053155] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:07:27.750 [2024-11-20 15:15:14.053282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61436 ] 00:07:27.750 [2024-11-20 15:15:14.221325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.008 [2024-11-20 15:15:14.338736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.265 [2024-11-20 15:15:14.557242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.265 [2024-11-20 15:15:14.557315] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.523 15:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.523 15:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:28.523 15:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:28.523 15:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:28.523 15:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.523 15:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.523 BaseBdev1_malloc 00:07:28.523 15:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.523 15:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:28.523 15:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.523 15:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.523 true 00:07:28.523 15:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:28.523 15:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:28.523 15:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.523 15:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.523 [2024-11-20 15:15:14.971389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:28.523 [2024-11-20 15:15:14.971588] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:28.523 [2024-11-20 15:15:14.971622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:28.523 [2024-11-20 15:15:14.971638] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:28.523 [2024-11-20 15:15:14.974100] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:28.523 [2024-11-20 15:15:14.974146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:28.523 BaseBdev1 00:07:28.523 15:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.523 15:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:28.523 15:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:28.523 15:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.523 15:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.781 BaseBdev2_malloc 00:07:28.781 15:15:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.781 15:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:28.781 15:15:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.781 15:15:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.781 true 00:07:28.781 15:15:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.781 15:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:28.781 15:15:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.781 15:15:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.781 [2024-11-20 15:15:15.040512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:28.781 [2024-11-20 15:15:15.040694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:28.781 [2024-11-20 15:15:15.040721] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:28.781 [2024-11-20 15:15:15.040735] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:28.781 [2024-11-20 15:15:15.043053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:28.781 [2024-11-20 15:15:15.043096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:28.781 BaseBdev2 00:07:28.781 15:15:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.781 15:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:28.781 15:15:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.781 15:15:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.781 [2024-11-20 15:15:15.052562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:28.781 [2024-11-20 15:15:15.054732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:28.781 [2024-11-20 15:15:15.054909] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:28.781 [2024-11-20 15:15:15.054929] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:28.781 [2024-11-20 15:15:15.055170] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:28.781 [2024-11-20 15:15:15.055320] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:28.781 [2024-11-20 15:15:15.055350] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:28.781 [2024-11-20 15:15:15.055508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:28.781 15:15:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.781 15:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:28.781 15:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:28.781 15:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:28.781 15:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:28.781 15:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.781 15:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:28.781 15:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.782 15:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.782 15:15:15 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.782 15:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.782 15:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.782 15:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:28.782 15:15:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.782 15:15:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.782 15:15:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.782 15:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.782 "name": "raid_bdev1", 00:07:28.782 "uuid": "6f6f2e13-ec7b-4641-9b8d-00d24fb46ea6", 00:07:28.782 "strip_size_kb": 64, 00:07:28.782 "state": "online", 00:07:28.782 "raid_level": "raid0", 00:07:28.782 "superblock": true, 00:07:28.782 "num_base_bdevs": 2, 00:07:28.782 "num_base_bdevs_discovered": 2, 00:07:28.782 "num_base_bdevs_operational": 2, 00:07:28.782 "base_bdevs_list": [ 00:07:28.782 { 00:07:28.782 "name": "BaseBdev1", 00:07:28.782 "uuid": "ddfa98d1-f203-5f7c-bd62-051bb473a3df", 00:07:28.782 "is_configured": true, 00:07:28.782 "data_offset": 2048, 00:07:28.782 "data_size": 63488 00:07:28.782 }, 00:07:28.782 { 00:07:28.782 "name": "BaseBdev2", 00:07:28.782 "uuid": "ad1ab3aa-7073-5d89-b947-8a3e1050cf83", 00:07:28.782 "is_configured": true, 00:07:28.782 "data_offset": 2048, 00:07:28.782 "data_size": 63488 00:07:28.782 } 00:07:28.782 ] 00:07:28.782 }' 00:07:28.782 15:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.782 15:15:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.040 15:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:29.040 15:15:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:29.297 [2024-11-20 15:15:15.553245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:30.247 15:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:30.247 15:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.247 15:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.247 15:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.247 15:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:30.247 15:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:30.247 15:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:30.247 15:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:30.247 15:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:30.247 15:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:30.247 15:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:30.247 15:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:30.247 15:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:30.247 15:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.247 15:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.247 15:15:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.247 15:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.247 15:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.247 15:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:30.247 15:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.247 15:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.247 15:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.247 15:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.247 "name": "raid_bdev1", 00:07:30.247 "uuid": "6f6f2e13-ec7b-4641-9b8d-00d24fb46ea6", 00:07:30.247 "strip_size_kb": 64, 00:07:30.247 "state": "online", 00:07:30.247 "raid_level": "raid0", 00:07:30.247 "superblock": true, 00:07:30.247 "num_base_bdevs": 2, 00:07:30.247 "num_base_bdevs_discovered": 2, 00:07:30.247 "num_base_bdevs_operational": 2, 00:07:30.247 "base_bdevs_list": [ 00:07:30.247 { 00:07:30.247 "name": "BaseBdev1", 00:07:30.247 "uuid": "ddfa98d1-f203-5f7c-bd62-051bb473a3df", 00:07:30.247 "is_configured": true, 00:07:30.247 "data_offset": 2048, 00:07:30.247 "data_size": 63488 00:07:30.247 }, 00:07:30.247 { 00:07:30.247 "name": "BaseBdev2", 00:07:30.247 "uuid": "ad1ab3aa-7073-5d89-b947-8a3e1050cf83", 00:07:30.247 "is_configured": true, 00:07:30.247 "data_offset": 2048, 00:07:30.247 "data_size": 63488 00:07:30.247 } 00:07:30.247 ] 00:07:30.247 }' 00:07:30.247 15:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.247 15:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.507 15:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:30.507 15:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.507 15:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.507 [2024-11-20 15:15:16.914242] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:30.507 [2024-11-20 15:15:16.914291] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:30.507 [2024-11-20 15:15:16.917469] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:30.507 [2024-11-20 15:15:16.917530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:30.507 [2024-11-20 15:15:16.917572] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:30.507 [2024-11-20 15:15:16.917592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:30.507 { 00:07:30.507 "results": [ 00:07:30.507 { 00:07:30.507 "job": "raid_bdev1", 00:07:30.507 "core_mask": "0x1", 00:07:30.507 "workload": "randrw", 00:07:30.507 "percentage": 50, 00:07:30.507 "status": "finished", 00:07:30.507 "queue_depth": 1, 00:07:30.507 "io_size": 131072, 00:07:30.507 "runtime": 1.361079, 00:07:30.507 "iops": 15846.251393196133, 00:07:30.507 "mibps": 1980.7814241495166, 00:07:30.507 "io_failed": 1, 00:07:30.507 "io_timeout": 0, 00:07:30.507 "avg_latency_us": 86.83087690369247, 00:07:30.507 "min_latency_us": 26.936546184738955, 00:07:30.507 "max_latency_us": 1401.5228915662651 00:07:30.507 } 00:07:30.507 ], 00:07:30.507 "core_count": 1 00:07:30.507 } 00:07:30.507 15:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.507 15:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61436 00:07:30.507 15:15:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61436 ']' 00:07:30.507 15:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61436 00:07:30.507 15:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:30.507 15:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.507 15:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61436 00:07:30.507 killing process with pid 61436 00:07:30.507 15:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.507 15:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.507 15:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61436' 00:07:30.507 15:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61436 00:07:30.507 [2024-11-20 15:15:16.960089] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:30.507 15:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61436 00:07:30.767 [2024-11-20 15:15:17.096612] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:32.145 15:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LHfrL3rNHb 00:07:32.145 15:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:32.145 15:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:32.145 15:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:32.145 15:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:32.145 15:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:32.145 15:15:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:32.145 15:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:32.145 00:07:32.145 real 0m4.360s 00:07:32.145 user 0m5.173s 00:07:32.145 sys 0m0.573s 00:07:32.145 15:15:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.145 ************************************ 00:07:32.145 END TEST raid_write_error_test 00:07:32.145 ************************************ 00:07:32.145 15:15:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.145 15:15:18 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:32.145 15:15:18 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:32.145 15:15:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:32.145 15:15:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.145 15:15:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:32.145 ************************************ 00:07:32.145 START TEST raid_state_function_test 00:07:32.145 ************************************ 00:07:32.145 15:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:07:32.145 15:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:32.145 15:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:32.145 15:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:32.145 15:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:32.145 15:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:32.145 15:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:32.145 15:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:32.145 15:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:32.145 15:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:32.145 15:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:32.145 15:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:32.145 15:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:32.145 15:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:32.145 15:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:32.145 15:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:32.145 15:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:32.145 15:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:32.145 15:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:32.146 15:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:32.146 15:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:32.146 15:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:32.146 15:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:32.146 15:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:32.146 15:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61574 00:07:32.146 Process raid pid: 61574 
00:07:32.146 15:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:32.146 15:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61574' 00:07:32.146 15:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61574 00:07:32.146 15:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61574 ']' 00:07:32.146 15:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.146 15:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.146 15:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.146 15:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.146 15:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.146 [2024-11-20 15:15:18.477302] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:07:32.146 [2024-11-20 15:15:18.477635] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:32.405 [2024-11-20 15:15:18.661803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.405 [2024-11-20 15:15:18.781232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.664 [2024-11-20 15:15:19.001242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.664 [2024-11-20 15:15:19.001296] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.924 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.924 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:32.924 15:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:32.924 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.924 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.924 [2024-11-20 15:15:19.374631] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:32.924 [2024-11-20 15:15:19.374872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:32.924 [2024-11-20 15:15:19.374972] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:32.924 [2024-11-20 15:15:19.375021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:32.924 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.924 15:15:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:32.924 15:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.924 15:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.924 15:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:32.924 15:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.924 15:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.924 15:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.924 15:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.924 15:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.924 15:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.924 15:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.924 15:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.924 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.924 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.185 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.185 15:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.185 "name": "Existed_Raid", 00:07:33.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.185 "strip_size_kb": 64, 00:07:33.185 "state": "configuring", 00:07:33.185 
"raid_level": "concat", 00:07:33.185 "superblock": false, 00:07:33.185 "num_base_bdevs": 2, 00:07:33.185 "num_base_bdevs_discovered": 0, 00:07:33.185 "num_base_bdevs_operational": 2, 00:07:33.185 "base_bdevs_list": [ 00:07:33.185 { 00:07:33.185 "name": "BaseBdev1", 00:07:33.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.185 "is_configured": false, 00:07:33.185 "data_offset": 0, 00:07:33.185 "data_size": 0 00:07:33.185 }, 00:07:33.185 { 00:07:33.185 "name": "BaseBdev2", 00:07:33.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.185 "is_configured": false, 00:07:33.185 "data_offset": 0, 00:07:33.185 "data_size": 0 00:07:33.185 } 00:07:33.185 ] 00:07:33.185 }' 00:07:33.185 15:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.185 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.445 15:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:33.445 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.445 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.445 [2024-11-20 15:15:19.774007] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:33.445 [2024-11-20 15:15:19.774045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:33.445 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.445 15:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:33.445 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.445 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:33.445 [2024-11-20 15:15:19.781995] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:33.445 [2024-11-20 15:15:19.782181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:33.445 [2024-11-20 15:15:19.782204] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:33.445 [2024-11-20 15:15:19.782222] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:33.445 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.445 15:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:33.445 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.445 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.445 [2024-11-20 15:15:19.828857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:33.445 BaseBdev1 00:07:33.445 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.445 15:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:33.445 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:33.445 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:33.445 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:33.445 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:33.445 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:33.446 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:33.446 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.446 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.446 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.446 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:33.446 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.446 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.446 [ 00:07:33.446 { 00:07:33.446 "name": "BaseBdev1", 00:07:33.446 "aliases": [ 00:07:33.446 "95cb345c-a011-487c-b837-3a36f7a8f9c7" 00:07:33.446 ], 00:07:33.446 "product_name": "Malloc disk", 00:07:33.446 "block_size": 512, 00:07:33.446 "num_blocks": 65536, 00:07:33.446 "uuid": "95cb345c-a011-487c-b837-3a36f7a8f9c7", 00:07:33.446 "assigned_rate_limits": { 00:07:33.446 "rw_ios_per_sec": 0, 00:07:33.446 "rw_mbytes_per_sec": 0, 00:07:33.446 "r_mbytes_per_sec": 0, 00:07:33.446 "w_mbytes_per_sec": 0 00:07:33.446 }, 00:07:33.446 "claimed": true, 00:07:33.446 "claim_type": "exclusive_write", 00:07:33.446 "zoned": false, 00:07:33.446 "supported_io_types": { 00:07:33.446 "read": true, 00:07:33.446 "write": true, 00:07:33.446 "unmap": true, 00:07:33.446 "flush": true, 00:07:33.446 "reset": true, 00:07:33.446 "nvme_admin": false, 00:07:33.446 "nvme_io": false, 00:07:33.446 "nvme_io_md": false, 00:07:33.446 "write_zeroes": true, 00:07:33.446 "zcopy": true, 00:07:33.446 "get_zone_info": false, 00:07:33.446 "zone_management": false, 00:07:33.446 "zone_append": false, 00:07:33.446 "compare": false, 00:07:33.446 "compare_and_write": false, 00:07:33.446 "abort": true, 00:07:33.446 "seek_hole": false, 00:07:33.446 "seek_data": false, 00:07:33.446 "copy": true, 00:07:33.446 "nvme_iov_md": 
false 00:07:33.446 }, 00:07:33.446 "memory_domains": [ 00:07:33.446 { 00:07:33.446 "dma_device_id": "system", 00:07:33.446 "dma_device_type": 1 00:07:33.446 }, 00:07:33.446 { 00:07:33.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.446 "dma_device_type": 2 00:07:33.446 } 00:07:33.446 ], 00:07:33.446 "driver_specific": {} 00:07:33.446 } 00:07:33.446 ] 00:07:33.446 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.446 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:33.446 15:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:33.446 15:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.446 15:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:33.446 15:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:33.446 15:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.446 15:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.446 15:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.446 15:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.446 15:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.446 15:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.446 15:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.446 15:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.446 
15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.446 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.446 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.446 15:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.446 "name": "Existed_Raid", 00:07:33.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.446 "strip_size_kb": 64, 00:07:33.446 "state": "configuring", 00:07:33.446 "raid_level": "concat", 00:07:33.446 "superblock": false, 00:07:33.446 "num_base_bdevs": 2, 00:07:33.446 "num_base_bdevs_discovered": 1, 00:07:33.446 "num_base_bdevs_operational": 2, 00:07:33.446 "base_bdevs_list": [ 00:07:33.446 { 00:07:33.446 "name": "BaseBdev1", 00:07:33.446 "uuid": "95cb345c-a011-487c-b837-3a36f7a8f9c7", 00:07:33.446 "is_configured": true, 00:07:33.446 "data_offset": 0, 00:07:33.446 "data_size": 65536 00:07:33.446 }, 00:07:33.446 { 00:07:33.446 "name": "BaseBdev2", 00:07:33.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.446 "is_configured": false, 00:07:33.446 "data_offset": 0, 00:07:33.446 "data_size": 0 00:07:33.446 } 00:07:33.446 ] 00:07:33.446 }' 00:07:33.446 15:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.446 15:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.014 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:34.014 15:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.014 15:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.014 [2024-11-20 15:15:20.280348] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:34.014 [2024-11-20 15:15:20.280407] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:34.014 15:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.014 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:34.014 15:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.014 15:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.014 [2024-11-20 15:15:20.288389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:34.014 [2024-11-20 15:15:20.290829] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:34.014 [2024-11-20 15:15:20.291049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:34.014 15:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.014 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:34.014 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:34.014 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:34.014 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:34.014 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:34.014 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:34.014 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:34.014 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:34.014 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.014 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.014 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.014 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.014 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.014 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:34.014 15:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.014 15:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.014 15:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.014 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.014 "name": "Existed_Raid", 00:07:34.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:34.014 "strip_size_kb": 64, 00:07:34.014 "state": "configuring", 00:07:34.014 "raid_level": "concat", 00:07:34.014 "superblock": false, 00:07:34.014 "num_base_bdevs": 2, 00:07:34.014 "num_base_bdevs_discovered": 1, 00:07:34.014 "num_base_bdevs_operational": 2, 00:07:34.014 "base_bdevs_list": [ 00:07:34.014 { 00:07:34.014 "name": "BaseBdev1", 00:07:34.014 "uuid": "95cb345c-a011-487c-b837-3a36f7a8f9c7", 00:07:34.014 "is_configured": true, 00:07:34.014 "data_offset": 0, 00:07:34.014 "data_size": 65536 00:07:34.014 }, 00:07:34.014 { 00:07:34.014 "name": "BaseBdev2", 00:07:34.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:34.014 "is_configured": false, 00:07:34.014 "data_offset": 0, 00:07:34.014 "data_size": 0 00:07:34.014 } 
00:07:34.014 ] 00:07:34.014 }' 00:07:34.014 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.014 15:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.275 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:34.275 15:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.275 15:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.534 [2024-11-20 15:15:20.760091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:34.534 [2024-11-20 15:15:20.760151] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:34.534 [2024-11-20 15:15:20.760162] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:34.534 [2024-11-20 15:15:20.760448] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:34.534 [2024-11-20 15:15:20.760624] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:34.534 [2024-11-20 15:15:20.760639] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:34.535 [2024-11-20 15:15:20.760969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:34.535 BaseBdev2 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:34.535 15:15:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.535 [ 00:07:34.535 { 00:07:34.535 "name": "BaseBdev2", 00:07:34.535 "aliases": [ 00:07:34.535 "0d2b890d-20fc-490c-903b-e9f982670a34" 00:07:34.535 ], 00:07:34.535 "product_name": "Malloc disk", 00:07:34.535 "block_size": 512, 00:07:34.535 "num_blocks": 65536, 00:07:34.535 "uuid": "0d2b890d-20fc-490c-903b-e9f982670a34", 00:07:34.535 "assigned_rate_limits": { 00:07:34.535 "rw_ios_per_sec": 0, 00:07:34.535 "rw_mbytes_per_sec": 0, 00:07:34.535 "r_mbytes_per_sec": 0, 00:07:34.535 "w_mbytes_per_sec": 0 00:07:34.535 }, 00:07:34.535 "claimed": true, 00:07:34.535 "claim_type": "exclusive_write", 00:07:34.535 "zoned": false, 00:07:34.535 "supported_io_types": { 00:07:34.535 "read": true, 00:07:34.535 "write": true, 00:07:34.535 "unmap": true, 00:07:34.535 "flush": true, 00:07:34.535 "reset": true, 00:07:34.535 "nvme_admin": false, 00:07:34.535 "nvme_io": false, 00:07:34.535 "nvme_io_md": 
false, 00:07:34.535 "write_zeroes": true, 00:07:34.535 "zcopy": true, 00:07:34.535 "get_zone_info": false, 00:07:34.535 "zone_management": false, 00:07:34.535 "zone_append": false, 00:07:34.535 "compare": false, 00:07:34.535 "compare_and_write": false, 00:07:34.535 "abort": true, 00:07:34.535 "seek_hole": false, 00:07:34.535 "seek_data": false, 00:07:34.535 "copy": true, 00:07:34.535 "nvme_iov_md": false 00:07:34.535 }, 00:07:34.535 "memory_domains": [ 00:07:34.535 { 00:07:34.535 "dma_device_id": "system", 00:07:34.535 "dma_device_type": 1 00:07:34.535 }, 00:07:34.535 { 00:07:34.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.535 "dma_device_type": 2 00:07:34.535 } 00:07:34.535 ], 00:07:34.535 "driver_specific": {} 00:07:34.535 } 00:07:34.535 ] 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.535 "name": "Existed_Raid", 00:07:34.535 "uuid": "51b264e8-6534-496d-a0ac-91a634fd375f", 00:07:34.535 "strip_size_kb": 64, 00:07:34.535 "state": "online", 00:07:34.535 "raid_level": "concat", 00:07:34.535 "superblock": false, 00:07:34.535 "num_base_bdevs": 2, 00:07:34.535 "num_base_bdevs_discovered": 2, 00:07:34.535 "num_base_bdevs_operational": 2, 00:07:34.535 "base_bdevs_list": [ 00:07:34.535 { 00:07:34.535 "name": "BaseBdev1", 00:07:34.535 "uuid": "95cb345c-a011-487c-b837-3a36f7a8f9c7", 00:07:34.535 "is_configured": true, 00:07:34.535 "data_offset": 0, 00:07:34.535 "data_size": 65536 00:07:34.535 }, 00:07:34.535 { 00:07:34.535 "name": "BaseBdev2", 00:07:34.535 "uuid": "0d2b890d-20fc-490c-903b-e9f982670a34", 00:07:34.535 "is_configured": true, 00:07:34.535 "data_offset": 0, 00:07:34.535 "data_size": 65536 00:07:34.535 } 00:07:34.535 ] 00:07:34.535 }' 00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
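The sizes in the trace above are self-consistent: `bdev_malloc_create 32 512` makes a 32 MiB bdev with 512-byte blocks, and a two-member concat volume simply adds the members' block counts. A minimal shell sketch of that arithmetic (variable names here are illustrative, not from the test scripts):

```shell
# bdev_malloc_create 32 512 -> 32 MiB disk with 512-byte blocks
size_mb=32
block_size=512
blocks_per_bdev=$(( size_mb * 1024 * 1024 / block_size ))  # matches "num_blocks": 65536 in the dump
# a two-member concat sums the block counts of its base bdevs
total_blocks=$(( 2 * blocks_per_bdev ))                    # matches "blockcnt 131072, blocklen 512"
echo "$blocks_per_bdev $total_blocks"
```

This reproduces exactly the `num_blocks: 65536` per base bdev and the raid volume's `blockcnt 131072` logged by `raid_bdev_configure_cont`.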
00:07:34.535 15:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.794 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:34.794 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:34.794 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:34.794 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:34.794 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:34.794 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:34.794 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:34.794 15:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.794 15:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.794 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:34.794 [2024-11-20 15:15:21.219794] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:34.794 15:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.794 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:34.794 "name": "Existed_Raid", 00:07:34.794 "aliases": [ 00:07:34.794 "51b264e8-6534-496d-a0ac-91a634fd375f" 00:07:34.794 ], 00:07:34.794 "product_name": "Raid Volume", 00:07:34.794 "block_size": 512, 00:07:34.794 "num_blocks": 131072, 00:07:34.794 "uuid": "51b264e8-6534-496d-a0ac-91a634fd375f", 00:07:34.794 "assigned_rate_limits": { 00:07:34.794 "rw_ios_per_sec": 0, 00:07:34.794 "rw_mbytes_per_sec": 0, 00:07:34.794 "r_mbytes_per_sec": 
0, 00:07:34.794 "w_mbytes_per_sec": 0 00:07:34.794 }, 00:07:34.794 "claimed": false, 00:07:34.794 "zoned": false, 00:07:34.794 "supported_io_types": { 00:07:34.794 "read": true, 00:07:34.794 "write": true, 00:07:34.794 "unmap": true, 00:07:34.794 "flush": true, 00:07:34.794 "reset": true, 00:07:34.794 "nvme_admin": false, 00:07:34.794 "nvme_io": false, 00:07:34.794 "nvme_io_md": false, 00:07:34.794 "write_zeroes": true, 00:07:34.794 "zcopy": false, 00:07:34.794 "get_zone_info": false, 00:07:34.794 "zone_management": false, 00:07:34.794 "zone_append": false, 00:07:34.794 "compare": false, 00:07:34.794 "compare_and_write": false, 00:07:34.794 "abort": false, 00:07:34.794 "seek_hole": false, 00:07:34.794 "seek_data": false, 00:07:34.794 "copy": false, 00:07:34.794 "nvme_iov_md": false 00:07:34.794 }, 00:07:34.794 "memory_domains": [ 00:07:34.794 { 00:07:34.794 "dma_device_id": "system", 00:07:34.794 "dma_device_type": 1 00:07:34.794 }, 00:07:34.794 { 00:07:34.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.794 "dma_device_type": 2 00:07:34.794 }, 00:07:34.794 { 00:07:34.794 "dma_device_id": "system", 00:07:34.794 "dma_device_type": 1 00:07:34.794 }, 00:07:34.794 { 00:07:34.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.794 "dma_device_type": 2 00:07:34.794 } 00:07:34.794 ], 00:07:34.794 "driver_specific": { 00:07:34.794 "raid": { 00:07:34.794 "uuid": "51b264e8-6534-496d-a0ac-91a634fd375f", 00:07:34.794 "strip_size_kb": 64, 00:07:34.794 "state": "online", 00:07:34.794 "raid_level": "concat", 00:07:34.794 "superblock": false, 00:07:34.794 "num_base_bdevs": 2, 00:07:34.794 "num_base_bdevs_discovered": 2, 00:07:34.794 "num_base_bdevs_operational": 2, 00:07:34.794 "base_bdevs_list": [ 00:07:34.794 { 00:07:34.794 "name": "BaseBdev1", 00:07:34.794 "uuid": "95cb345c-a011-487c-b837-3a36f7a8f9c7", 00:07:34.794 "is_configured": true, 00:07:34.794 "data_offset": 0, 00:07:34.794 "data_size": 65536 00:07:34.794 }, 00:07:34.794 { 00:07:34.794 "name": "BaseBdev2", 
00:07:34.794 "uuid": "0d2b890d-20fc-490c-903b-e9f982670a34", 00:07:34.794 "is_configured": true, 00:07:34.794 "data_offset": 0, 00:07:34.794 "data_size": 65536 00:07:34.794 } 00:07:34.794 ] 00:07:34.794 } 00:07:34.794 } 00:07:34.794 }' 00:07:34.794 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:35.052 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:35.052 BaseBdev2' 00:07:35.052 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:35.052 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:35.052 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:35.052 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:35.052 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:35.052 15:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.052 15:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.052 15:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.052 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:35.052 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:35.052 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:35.052 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
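The `cmp_raid_bdev`/`cmp_base_bdev` strings above come from `jq`'s `join(" ")` over `[.block_size, .md_size, .md_interleave, .dif_type]`: on a plain malloc bdev the last three fields are null, so they join as empty strings and leave trailing separator spaces. A small sketch of why the expected pattern ends up as `512` plus three spaces (local variables are illustrative):

```shell
# Emulate jq's '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# for a plain 512-byte-block bdev: the three metadata fields are null -> empty strings.
block_size=512
md_size=
md_interleave=
dif_type=
cmp_base_bdev="$block_size $md_size $md_interleave $dif_type"
# "512" plus three separator spaces = 6 characters, which is why the traced
# comparison is written [[ 512 == \5\1\2\ \ \ ]] (three escaped trailing spaces)
echo "len=${#cmp_base_bdev}"
```

The length check sidesteps displaying literal trailing whitespace; the harness compares the strings directly.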
00:07:35.052 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:35.052 15:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.052 15:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.052 15:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.052 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:35.052 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:35.052 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:35.052 15:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.052 15:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.052 [2024-11-20 15:15:21.431554] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:35.052 [2024-11-20 15:15:21.431749] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:35.052 [2024-11-20 15:15:21.431832] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:35.052 15:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.052 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:35.052 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:35.052 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:35.052 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:35.052 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:35.053 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:35.053 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:35.053 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:35.053 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:35.053 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:35.053 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:35.053 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.053 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:35.053 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:35.053 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.312 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.312 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:35.312 15:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.312 15:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.312 15:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.312 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:35.312 "name": "Existed_Raid", 00:07:35.312 "uuid": "51b264e8-6534-496d-a0ac-91a634fd375f", 00:07:35.312 "strip_size_kb": 64, 00:07:35.312 
"state": "offline", 00:07:35.312 "raid_level": "concat", 00:07:35.312 "superblock": false, 00:07:35.312 "num_base_bdevs": 2, 00:07:35.312 "num_base_bdevs_discovered": 1, 00:07:35.312 "num_base_bdevs_operational": 1, 00:07:35.312 "base_bdevs_list": [ 00:07:35.312 { 00:07:35.312 "name": null, 00:07:35.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.312 "is_configured": false, 00:07:35.312 "data_offset": 0, 00:07:35.312 "data_size": 65536 00:07:35.312 }, 00:07:35.312 { 00:07:35.312 "name": "BaseBdev2", 00:07:35.312 "uuid": "0d2b890d-20fc-490c-903b-e9f982670a34", 00:07:35.312 "is_configured": true, 00:07:35.312 "data_offset": 0, 00:07:35.312 "data_size": 65536 00:07:35.312 } 00:07:35.312 ] 00:07:35.312 }' 00:07:35.312 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:35.312 15:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.571 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:35.571 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:35.571 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.571 15:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.571 15:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.571 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:35.571 15:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.571 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:35.571 15:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:35.571 15:15:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:35.571 15:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.571 15:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.571 [2024-11-20 15:15:21.973862] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:35.571 [2024-11-20 15:15:21.974150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:35.850 15:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.850 15:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:35.850 15:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:35.850 15:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.850 15:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.850 15:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.850 15:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:35.850 15:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.850 15:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:35.850 15:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:35.851 15:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:35.851 15:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61574 00:07:35.851 15:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61574 ']' 00:07:35.851 15:15:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61574 00:07:35.851 15:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:35.851 15:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.851 15:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61574 00:07:35.851 killing process with pid 61574 00:07:35.851 15:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:35.851 15:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:35.851 15:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61574' 00:07:35.851 15:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61574 00:07:35.851 [2024-11-20 15:15:22.162725] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:35.851 15:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61574 00:07:35.851 [2024-11-20 15:15:22.181029] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:37.251 15:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:37.251 00:07:37.251 real 0m5.028s 00:07:37.251 user 0m7.161s 00:07:37.251 sys 0m0.842s 00:07:37.251 15:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.251 15:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.251 ************************************ 00:07:37.251 END TEST raid_state_function_test 00:07:37.251 ************************************ 00:07:37.251 15:15:23 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:37.251 15:15:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:37.251 15:15:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.251 15:15:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:37.251 ************************************ 00:07:37.251 START TEST raid_state_function_test_sb 00:07:37.251 ************************************ 00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
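This second run is invoked as `raid_state_function_test concat 2 true`, with the third argument selecting superblock mode. The strip-size and superblock flag assembly the test performs (bdev_raid.sh@215–@223 in the trace) boils down to roughly this sketch, echoing the composed command instead of calling `rpc_cmd`:

```shell
raid_level=concat
num_base_bdevs=2   # two members, matching the BaseBdev1/BaseBdev2 list in the trace
superblock=true
# concat (like raid0) takes a strip size; raid1 would not (bdev_raid.sh@215-@217)
if [ "$raid_level" != raid1 ]; then
  strip_size=64
  strip_size_create_arg="-z $strip_size"
fi
# superblock=true becomes the -s flag on bdev_raid_create (bdev_raid.sh@222-@223)
if [ "$superblock" = true ]; then
  superblock_create_arg=-s
fi
echo "bdev_raid_create $strip_size_create_arg $superblock_create_arg -r $raid_level -n Existed_Raid"
```

The echoed flags match the later traced call `rpc_cmd bdev_raid_create -z 64 -s -r concat ... -n Existed_Raid` (the `-b` base-bdev list is omitted here for brevity).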
00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:07:37.251 Process raid pid: 61827
00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61827
00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61827'
00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61827
00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61827 ']'
00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:37.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:37.251 15:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:37.251 [2024-11-20 15:15:23.587243] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization...
00:07:37.251 [2024-11-20 15:15:23.587402] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:37.510 [2024-11-20 15:15:23.775942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:37.510 [2024-11-20 15:15:23.913007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:37.769 [2024-11-20 15:15:24.147024] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:37.769 [2024-11-20 15:15:24.147294] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:38.336 15:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:38.336 15:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
00:07:38.336 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:38.336 15:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:38.336 15:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:38.336 [2024-11-20 15:15:24.527555] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:38.336 [2024-11-20 15:15:24.527615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:38.336 [2024-11-20 15:15:24.527628] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:38.336 [2024-11-20 15:15:24.527642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:38.336 15:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:38.336 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:07:38.336 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:38.336 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:38.336 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:38.336 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:38.336 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:38.336 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:38.336 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:38.336 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:38.336 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:38.336 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:38.336 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:38.336 15:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:38.336 15:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:38.336 15:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:38.336 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:38.336 "name": "Existed_Raid",
00:07:38.336 "uuid": "f6b65d36-bf97-432c-83af-8f9bc3fe1f5a",
00:07:38.336 "strip_size_kb": 64,
00:07:38.336 "state": "configuring",
00:07:38.336 "raid_level": "concat",
00:07:38.336 "superblock": true,
00:07:38.336 "num_base_bdevs": 2,
00:07:38.336 "num_base_bdevs_discovered": 0,
00:07:38.336 "num_base_bdevs_operational": 2,
00:07:38.336 "base_bdevs_list": [
00:07:38.336 {
00:07:38.336 "name": "BaseBdev1",
00:07:38.336 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:38.336 "is_configured": false,
00:07:38.336 "data_offset": 0,
00:07:38.336 "data_size": 0
00:07:38.336 },
00:07:38.336 {
00:07:38.336 "name": "BaseBdev2",
00:07:38.336 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:38.336 "is_configured": false,
00:07:38.336 "data_offset": 0,
00:07:38.336 "data_size": 0
00:07:38.336 }
00:07:38.336 ]
00:07:38.336 }'
00:07:38.336 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:38.336 15:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:38.595 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:38.595 15:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:38.595 15:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:38.595 [2024-11-20 15:15:24.939536] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:38.595 [2024-11-20 15:15:24.939731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:07:38.595 15:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:38.595 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:38.595 15:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:38.595 15:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:38.595 [2024-11-20 15:15:24.951527] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:38.595 [2024-11-20 15:15:24.951709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:38.595 [2024-11-20 15:15:24.951801] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:38.595 [2024-11-20 15:15:24.951852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:38.595 15:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:38.595 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:07:38.595 15:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:38.595 15:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:38.595 BaseBdev1
00:07:38.595 [2024-11-20 15:15:25.003807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:38.595 15:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:38.595 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:07:38.595 15:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:07:38.595 15:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:38.595 15:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:07:38.595 15:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:07:38.595 15:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:07:38.595 15:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:07:38.595 15:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:38.595 15:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:38.595 15:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:38.595 15:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:07:38.595 15:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:38.595 15:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:38.595 [
00:07:38.595 {
00:07:38.595 "name": "BaseBdev1",
00:07:38.595 "aliases": [
00:07:38.595 "a5b47796-d318-45d5-bd1b-411e8cc84f6c"
00:07:38.595 ],
00:07:38.595 "product_name": "Malloc disk",
00:07:38.595 "block_size": 512,
00:07:38.595 "num_blocks": 65536,
00:07:38.595 "uuid": "a5b47796-d318-45d5-bd1b-411e8cc84f6c",
00:07:38.595 "assigned_rate_limits": {
00:07:38.595 "rw_ios_per_sec": 0,
00:07:38.595 "rw_mbytes_per_sec": 0,
00:07:38.595 "r_mbytes_per_sec": 0,
00:07:38.595 "w_mbytes_per_sec": 0
00:07:38.595 },
00:07:38.595 "claimed": true,
00:07:38.595 "claim_type": "exclusive_write",
00:07:38.595 "zoned": false,
00:07:38.595 "supported_io_types": {
00:07:38.595 "read": true,
00:07:38.595 "write": true,
00:07:38.595 "unmap": true,
00:07:38.595 "flush": true,
00:07:38.595 "reset": true,
00:07:38.595 "nvme_admin": false,
00:07:38.595 "nvme_io": false,
00:07:38.595 "nvme_io_md": false,
00:07:38.595 "write_zeroes": true,
00:07:38.595 "zcopy": true,
00:07:38.595 "get_zone_info": false,
00:07:38.595 "zone_management": false,
00:07:38.595 "zone_append": false,
00:07:38.596 "compare": false,
00:07:38.596 "compare_and_write": false,
00:07:38.596 "abort": true,
00:07:38.596 "seek_hole": false,
00:07:38.596 "seek_data": false,
00:07:38.596 "copy": true,
00:07:38.596 "nvme_iov_md": false
00:07:38.596 },
00:07:38.596 "memory_domains": [
00:07:38.596 {
00:07:38.596 "dma_device_id": "system",
00:07:38.596 "dma_device_type": 1
00:07:38.596 },
00:07:38.596 {
00:07:38.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:38.596 "dma_device_type": 2
00:07:38.596 }
00:07:38.596 ],
00:07:38.596 "driver_specific": {}
00:07:38.596 }
00:07:38.596 ]
00:07:38.596 15:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:38.596 15:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:07:38.596 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:07:38.596 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:38.596 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:38.596 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:38.596 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:38.596 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:38.596 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:38.596 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:38.596 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:38.596 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:38.596 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:38.596 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:38.596 15:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:38.596 15:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:38.854 15:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:38.854 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:38.854 "name": "Existed_Raid",
00:07:38.854 "uuid": "8839752c-e1d3-4c30-81ee-60f6c9707e0b",
00:07:38.854 "strip_size_kb": 64,
00:07:38.854 "state": "configuring",
00:07:38.854 "raid_level": "concat",
00:07:38.854 "superblock": true,
00:07:38.854 "num_base_bdevs": 2,
00:07:38.854 "num_base_bdevs_discovered": 1,
00:07:38.854 "num_base_bdevs_operational": 2,
00:07:38.854 "base_bdevs_list": [
00:07:38.854 {
00:07:38.854 "name": "BaseBdev1",
00:07:38.854 "uuid": "a5b47796-d318-45d5-bd1b-411e8cc84f6c",
00:07:38.854 "is_configured": true,
00:07:38.854 "data_offset": 2048,
00:07:38.854 "data_size": 63488
00:07:38.854 },
00:07:38.854 {
00:07:38.854 "name": "BaseBdev2",
00:07:38.855 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:38.855 "is_configured": false,
00:07:38.855 "data_offset": 0,
00:07:38.855 "data_size": 0
00:07:38.855 }
00:07:38.855 ]
00:07:38.855 }'
00:07:38.855 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:38.855 15:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:39.113 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:39.113 15:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.113 15:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:39.113 [2024-11-20 15:15:25.471805] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:39.113 [2024-11-20 15:15:25.471861] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:07:39.113 15:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.113 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:39.114 15:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.114 15:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:39.114 [2024-11-20 15:15:25.483876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:39.114 [2024-11-20 15:15:25.486254] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:39.114 [2024-11-20 15:15:25.486417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:39.114 15:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.114 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:07:39.114 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:39.114 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:07:39.114 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:39.114 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:39.114 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:39.114 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:39.114 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:39.114 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:39.114 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:39.114 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:39.114 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:39.114 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:39.114 15:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.114 15:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:39.114 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:39.114 15:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.114 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:39.114 "name": "Existed_Raid",
00:07:39.114 "uuid": "21f720cd-ad97-41bf-9dba-fa166fdc6090",
00:07:39.114 "strip_size_kb": 64,
00:07:39.114 "state": "configuring",
00:07:39.114 "raid_level": "concat",
00:07:39.114 "superblock": true,
00:07:39.114 "num_base_bdevs": 2,
00:07:39.114 "num_base_bdevs_discovered": 1,
00:07:39.114 "num_base_bdevs_operational": 2,
00:07:39.114 "base_bdevs_list": [
00:07:39.114 {
00:07:39.114 "name": "BaseBdev1",
00:07:39.114 "uuid": "a5b47796-d318-45d5-bd1b-411e8cc84f6c",
00:07:39.114 "is_configured": true,
00:07:39.114 "data_offset": 2048,
00:07:39.114 "data_size": 63488
00:07:39.114 },
00:07:39.114 {
00:07:39.114 "name": "BaseBdev2",
00:07:39.114 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:39.114 "is_configured": false,
00:07:39.114 "data_offset": 0,
00:07:39.114 "data_size": 0
00:07:39.114 }
00:07:39.114 ]
00:07:39.114 }'
00:07:39.114 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:39.114 15:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:39.712 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:07:39.712 15:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.712 15:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:39.712 [2024-11-20 15:15:26.005326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:39.712 BaseBdev2
00:07:39.712 [2024-11-20 15:15:26.005831] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:07:39.712 [2024-11-20 15:15:26.005854] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:07:39.712 [2024-11-20 15:15:26.006147] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:39.712 [2024-11-20 15:15:26.006309] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:07:39.712 [2024-11-20 15:15:26.006327] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:07:39.712 [2024-11-20 15:15:26.006479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:39.712 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.712 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:07:39.712 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:07:39.712 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:39.712 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:07:39.712 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:07:39.712 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:07:39.713 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:07:39.713 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.713 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:39.713 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.713 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:07:39.713 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.713 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:39.713 [
00:07:39.713 {
00:07:39.713 "name": "BaseBdev2",
00:07:39.713 "aliases": [
00:07:39.713 "781ebadf-71d3-4418-9dce-b544be3178ee"
00:07:39.713 ],
00:07:39.713 "product_name": "Malloc disk",
00:07:39.713 "block_size": 512,
00:07:39.713 "num_blocks": 65536,
00:07:39.713 "uuid": "781ebadf-71d3-4418-9dce-b544be3178ee",
00:07:39.713 "assigned_rate_limits": {
00:07:39.713 "rw_ios_per_sec": 0,
00:07:39.713 "rw_mbytes_per_sec": 0,
00:07:39.713 "r_mbytes_per_sec": 0,
00:07:39.713 "w_mbytes_per_sec": 0
00:07:39.713 },
00:07:39.713 "claimed": true,
00:07:39.713 "claim_type": "exclusive_write",
00:07:39.713 "zoned": false,
00:07:39.713 "supported_io_types": {
00:07:39.713 "read": true,
00:07:39.713 "write": true,
00:07:39.713 "unmap": true,
00:07:39.713 "flush": true,
00:07:39.713 "reset": true,
00:07:39.713 "nvme_admin": false,
00:07:39.713 "nvme_io": false,
00:07:39.713 "nvme_io_md": false,
00:07:39.713 "write_zeroes": true,
00:07:39.713 "zcopy": true,
00:07:39.713 "get_zone_info": false,
00:07:39.713 "zone_management": false,
00:07:39.713 "zone_append": false,
00:07:39.713 "compare": false,
00:07:39.713 "compare_and_write": false,
00:07:39.713 "abort": true,
00:07:39.713 "seek_hole": false,
00:07:39.713 "seek_data": false,
00:07:39.713 "copy": true,
00:07:39.713 "nvme_iov_md": false
00:07:39.713 },
00:07:39.713 "memory_domains": [
00:07:39.713 {
00:07:39.713 "dma_device_id": "system",
00:07:39.713 "dma_device_type": 1
00:07:39.713 },
00:07:39.713 {
00:07:39.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:39.713 "dma_device_type": 2
00:07:39.713 }
00:07:39.713 ],
00:07:39.713 "driver_specific": {}
00:07:39.713 }
00:07:39.713 ]
00:07:39.713 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.713 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:07:39.713 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:07:39.713 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:39.713 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2
00:07:39.713 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:39.713 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:39.713 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:39.713 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:39.713 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:39.713 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:39.713 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:39.713 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:39.713 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:39.713 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:39.713 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.713 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:39.713 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:39.713 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.713 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:39.713 "name": "Existed_Raid",
00:07:39.713 "uuid": "21f720cd-ad97-41bf-9dba-fa166fdc6090",
00:07:39.713 "strip_size_kb": 64,
00:07:39.713 "state": "online",
00:07:39.713 "raid_level": "concat",
00:07:39.713 "superblock": true,
00:07:39.713 "num_base_bdevs": 2,
00:07:39.713 "num_base_bdevs_discovered": 2,
00:07:39.713 "num_base_bdevs_operational": 2,
00:07:39.713 "base_bdevs_list": [
00:07:39.713 {
00:07:39.713 "name": "BaseBdev1",
00:07:39.713 "uuid": "a5b47796-d318-45d5-bd1b-411e8cc84f6c",
00:07:39.713 "is_configured": true,
00:07:39.713 "data_offset": 2048,
00:07:39.713 "data_size": 63488
00:07:39.713 },
00:07:39.713 {
00:07:39.713 "name": "BaseBdev2",
00:07:39.713 "uuid": "781ebadf-71d3-4418-9dce-b544be3178ee",
00:07:39.713 "is_configured": true,
00:07:39.713 "data_offset": 2048,
00:07:39.713 "data_size": 63488
00:07:39.713 }
00:07:39.713 ]
00:07:39.713 }'
00:07:39.713 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:39.713 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:40.278 [2024-11-20 15:15:26.497078] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:40.278 "name": "Existed_Raid",
00:07:40.278 "aliases": [
00:07:40.278 "21f720cd-ad97-41bf-9dba-fa166fdc6090"
00:07:40.278 ],
00:07:40.278 "product_name": "Raid Volume",
00:07:40.278 "block_size": 512,
00:07:40.278 "num_blocks": 126976,
00:07:40.278 "uuid": "21f720cd-ad97-41bf-9dba-fa166fdc6090",
00:07:40.278 "assigned_rate_limits": {
00:07:40.278 "rw_ios_per_sec": 0,
00:07:40.278 "rw_mbytes_per_sec": 0,
00:07:40.278 "r_mbytes_per_sec": 0,
00:07:40.278 "w_mbytes_per_sec": 0
00:07:40.278 },
00:07:40.278 "claimed": false,
00:07:40.278 "zoned": false,
00:07:40.278 "supported_io_types": {
00:07:40.278 "read": true,
00:07:40.278 "write": true,
00:07:40.278 "unmap": true,
00:07:40.278 "flush": true,
00:07:40.278 "reset": true,
00:07:40.278 "nvme_admin": false,
00:07:40.278 "nvme_io": false,
00:07:40.278 "nvme_io_md": false,
00:07:40.278 "write_zeroes": true,
00:07:40.278 "zcopy": false,
00:07:40.278 "get_zone_info": false,
00:07:40.278 "zone_management": false,
00:07:40.278 "zone_append": false,
00:07:40.278 "compare": false,
00:07:40.278 "compare_and_write": false,
00:07:40.278 "abort": false,
00:07:40.278 "seek_hole": false,
00:07:40.278 "seek_data": false,
00:07:40.278 "copy": false,
00:07:40.278 "nvme_iov_md": false
00:07:40.278 },
00:07:40.278 "memory_domains": [
00:07:40.278 {
00:07:40.278 "dma_device_id": "system",
00:07:40.278 "dma_device_type": 1
00:07:40.278 },
00:07:40.278 {
00:07:40.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:40.278 "dma_device_type": 2
00:07:40.278 },
00:07:40.278 {
00:07:40.278 "dma_device_id": "system",
00:07:40.278 "dma_device_type": 1
00:07:40.278 },
00:07:40.278 {
00:07:40.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:40.278 "dma_device_type": 2
00:07:40.278 }
00:07:40.278 ],
00:07:40.278 "driver_specific": {
00:07:40.278 "raid": {
00:07:40.278 "uuid": "21f720cd-ad97-41bf-9dba-fa166fdc6090",
00:07:40.278 "strip_size_kb": 64,
00:07:40.278 "state": "online",
00:07:40.278 "raid_level": "concat",
00:07:40.278 "superblock": true,
00:07:40.278 "num_base_bdevs": 2,
00:07:40.278 "num_base_bdevs_discovered": 2,
00:07:40.278 "num_base_bdevs_operational": 2,
00:07:40.278 "base_bdevs_list": [
00:07:40.278 {
00:07:40.278 "name": "BaseBdev1",
00:07:40.278 "uuid": "a5b47796-d318-45d5-bd1b-411e8cc84f6c",
00:07:40.278 "is_configured": true,
00:07:40.278 "data_offset": 2048,
00:07:40.278 "data_size": 63488
00:07:40.278 },
00:07:40.278 {
00:07:40.278 "name": "BaseBdev2",
00:07:40.278 "uuid": "781ebadf-71d3-4418-9dce-b544be3178ee",
00:07:40.278 "is_configured": true,
00:07:40.278 "data_offset": 2048,
00:07:40.278 "data_size": 63488
00:07:40.278 }
00:07:40.278 ]
00:07:40.278 }
00:07:40.278 }
00:07:40.278 }'
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:07:40.278 BaseBdev2'
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:40.278 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:40.278 [2024-11-20 15:15:26.752838] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:07:40.278 [2024-11-20 15:15:26.752880] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:40.278 [2024-11-20 15:15:26.752936] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:40.584 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:40.584 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:07:40.584 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:07:40.584 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:40.584 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1
00:07:40.584 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:07:40.584 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1
00:07:40.584 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:40.584 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:07:40.584 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:40.584 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:40.584 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:07:40.584 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:40.584 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:40.584 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:40.584 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:40.584 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:40.584 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:40.584 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:40.584 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:40.584 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:40.584 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:40.584 "name": "Existed_Raid",
00:07:40.584 "uuid": "21f720cd-ad97-41bf-9dba-fa166fdc6090",
00:07:40.584 "strip_size_kb": 64,
00:07:40.584 "state": "offline",
00:07:40.584 "raid_level": "concat",
00:07:40.584 "superblock": true,
00:07:40.584 "num_base_bdevs": 2,
00:07:40.584 "num_base_bdevs_discovered": 1,
00:07:40.584 "num_base_bdevs_operational": 1,
00:07:40.584 "base_bdevs_list": [
00:07:40.584 {
00:07:40.584 "name": null,
00:07:40.584 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:40.584 "is_configured": false,
00:07:40.584 "data_offset": 0,
00:07:40.584 "data_size": 63488
00:07:40.584 },
00:07:40.584 {
00:07:40.584 "name": "BaseBdev2",
00:07:40.584 "uuid": "781ebadf-71d3-4418-9dce-b544be3178ee",
00:07:40.584 "is_configured": true,
00:07:40.584 "data_offset": 2048,
00:07:40.584 "data_size": 63488
00:07:40.584 }
00:07:40.584 ]
00:07:40.584 }'
15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.584 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.842 15:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:40.842 15:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:40.842 15:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.842 15:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.842 15:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.842 15:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:40.842 15:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.842 15:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:40.842 15:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:40.842 15:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:40.842 15:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.842 15:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.099 [2024-11-20 15:15:27.325894] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:41.099 [2024-11-20 15:15:27.325957] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:41.099 15:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.099 15:15:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:41.099 15:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:41.099 15:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.099 15:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.099 15:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:41.099 15:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.099 15:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.099 15:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:41.099 15:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:41.099 15:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:41.099 15:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61827 00:07:41.099 15:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61827 ']' 00:07:41.099 15:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61827 00:07:41.099 15:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:41.099 15:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.099 15:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61827 00:07:41.099 15:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.099 15:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.099 killing process 
with pid 61827 00:07:41.099 15:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61827' 00:07:41.099 15:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61827 00:07:41.099 [2024-11-20 15:15:27.521300] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:41.099 15:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61827 00:07:41.099 [2024-11-20 15:15:27.539196] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:42.475 15:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:42.475 00:07:42.475 real 0m5.269s 00:07:42.475 user 0m7.575s 00:07:42.475 sys 0m0.912s 00:07:42.475 15:15:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.475 ************************************ 00:07:42.475 END TEST raid_state_function_test_sb 00:07:42.475 ************************************ 00:07:42.475 15:15:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.475 15:15:28 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:42.475 15:15:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:42.475 15:15:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.475 15:15:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:42.475 ************************************ 00:07:42.475 START TEST raid_superblock_test 00:07:42.475 ************************************ 00:07:42.475 15:15:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:07:42.475 15:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:42.475 15:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 
00:07:42.475 15:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:42.475 15:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:42.475 15:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:42.475 15:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:42.475 15:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:42.475 15:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:42.475 15:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:42.475 15:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:42.475 15:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:42.475 15:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:42.475 15:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:42.475 15:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:42.475 15:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:42.475 15:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:42.475 15:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62084 00:07:42.475 15:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62084 00:07:42.475 15:15:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62084 ']' 00:07:42.475 15:15:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.475 15:15:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:07:42.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.475 15:15:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.475 15:15:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.475 15:15:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.475 15:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:42.475 [2024-11-20 15:15:28.909407] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:07:42.475 [2024-11-20 15:15:28.909541] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62084 ] 00:07:42.734 [2024-11-20 15:15:29.097577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.991 [2024-11-20 15:15:29.228003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.991 [2024-11-20 15:15:29.454665] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.991 [2024-11-20 15:15:29.454716] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.557 15:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.557 15:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:43.557 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:43.557 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:43.557 15:15:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.558 malloc1 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.558 [2024-11-20 15:15:29.829826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:43.558 [2024-11-20 15:15:29.829889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:43.558 [2024-11-20 15:15:29.829917] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:43.558 [2024-11-20 15:15:29.829931] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:43.558 [2024-11-20 15:15:29.832514] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:43.558 [2024-11-20 15:15:29.832561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:43.558 pt1 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.558 malloc2 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.558 15:15:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.558 [2024-11-20 15:15:29.889411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:43.558 [2024-11-20 15:15:29.889592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:43.558 [2024-11-20 15:15:29.889671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:43.558 [2024-11-20 15:15:29.889757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:43.558 [2024-11-20 15:15:29.892326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:43.558 [2024-11-20 15:15:29.892467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:43.558 pt2 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.558 [2024-11-20 15:15:29.901456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:43.558 [2024-11-20 15:15:29.903790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:43.558 [2024-11-20 15:15:29.903993] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:43.558 [2024-11-20 15:15:29.904133] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:43.558 
[2024-11-20 15:15:29.904457] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:43.558 [2024-11-20 15:15:29.904674] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:43.558 [2024-11-20 15:15:29.904723] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:43.558 [2024-11-20 15:15:29.904995] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.558 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.558 "name": "raid_bdev1", 00:07:43.558 "uuid": "ad72266c-27ef-4b5d-a3e0-cefdeb890e8f", 00:07:43.558 "strip_size_kb": 64, 00:07:43.558 "state": "online", 00:07:43.558 "raid_level": "concat", 00:07:43.558 "superblock": true, 00:07:43.558 "num_base_bdevs": 2, 00:07:43.558 "num_base_bdevs_discovered": 2, 00:07:43.558 "num_base_bdevs_operational": 2, 00:07:43.558 "base_bdevs_list": [ 00:07:43.558 { 00:07:43.558 "name": "pt1", 00:07:43.558 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:43.558 "is_configured": true, 00:07:43.558 "data_offset": 2048, 00:07:43.558 "data_size": 63488 00:07:43.558 }, 00:07:43.558 { 00:07:43.559 "name": "pt2", 00:07:43.559 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:43.559 "is_configured": true, 00:07:43.559 "data_offset": 2048, 00:07:43.559 "data_size": 63488 00:07:43.559 } 00:07:43.559 ] 00:07:43.559 }' 00:07:43.559 15:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.559 15:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.126 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:44.126 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:44.126 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:44.126 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:44.126 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:44.126 15:15:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:44.126 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:44.126 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:44.126 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.126 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.126 [2024-11-20 15:15:30.365061] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:44.126 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.126 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:44.126 "name": "raid_bdev1", 00:07:44.126 "aliases": [ 00:07:44.126 "ad72266c-27ef-4b5d-a3e0-cefdeb890e8f" 00:07:44.126 ], 00:07:44.126 "product_name": "Raid Volume", 00:07:44.126 "block_size": 512, 00:07:44.126 "num_blocks": 126976, 00:07:44.126 "uuid": "ad72266c-27ef-4b5d-a3e0-cefdeb890e8f", 00:07:44.126 "assigned_rate_limits": { 00:07:44.126 "rw_ios_per_sec": 0, 00:07:44.126 "rw_mbytes_per_sec": 0, 00:07:44.126 "r_mbytes_per_sec": 0, 00:07:44.126 "w_mbytes_per_sec": 0 00:07:44.126 }, 00:07:44.126 "claimed": false, 00:07:44.126 "zoned": false, 00:07:44.126 "supported_io_types": { 00:07:44.126 "read": true, 00:07:44.126 "write": true, 00:07:44.126 "unmap": true, 00:07:44.126 "flush": true, 00:07:44.126 "reset": true, 00:07:44.126 "nvme_admin": false, 00:07:44.126 "nvme_io": false, 00:07:44.126 "nvme_io_md": false, 00:07:44.126 "write_zeroes": true, 00:07:44.126 "zcopy": false, 00:07:44.126 "get_zone_info": false, 00:07:44.126 "zone_management": false, 00:07:44.126 "zone_append": false, 00:07:44.126 "compare": false, 00:07:44.126 "compare_and_write": false, 00:07:44.126 "abort": false, 00:07:44.126 "seek_hole": false, 00:07:44.126 
"seek_data": false, 00:07:44.126 "copy": false, 00:07:44.126 "nvme_iov_md": false 00:07:44.126 }, 00:07:44.126 "memory_domains": [ 00:07:44.126 { 00:07:44.126 "dma_device_id": "system", 00:07:44.126 "dma_device_type": 1 00:07:44.126 }, 00:07:44.126 { 00:07:44.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.126 "dma_device_type": 2 00:07:44.126 }, 00:07:44.126 { 00:07:44.126 "dma_device_id": "system", 00:07:44.126 "dma_device_type": 1 00:07:44.126 }, 00:07:44.126 { 00:07:44.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.126 "dma_device_type": 2 00:07:44.126 } 00:07:44.126 ], 00:07:44.126 "driver_specific": { 00:07:44.126 "raid": { 00:07:44.126 "uuid": "ad72266c-27ef-4b5d-a3e0-cefdeb890e8f", 00:07:44.126 "strip_size_kb": 64, 00:07:44.126 "state": "online", 00:07:44.126 "raid_level": "concat", 00:07:44.126 "superblock": true, 00:07:44.126 "num_base_bdevs": 2, 00:07:44.126 "num_base_bdevs_discovered": 2, 00:07:44.126 "num_base_bdevs_operational": 2, 00:07:44.126 "base_bdevs_list": [ 00:07:44.126 { 00:07:44.126 "name": "pt1", 00:07:44.126 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:44.126 "is_configured": true, 00:07:44.126 "data_offset": 2048, 00:07:44.126 "data_size": 63488 00:07:44.126 }, 00:07:44.126 { 00:07:44.127 "name": "pt2", 00:07:44.127 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:44.127 "is_configured": true, 00:07:44.127 "data_offset": 2048, 00:07:44.127 "data_size": 63488 00:07:44.127 } 00:07:44.127 ] 00:07:44.127 } 00:07:44.127 } 00:07:44.127 }' 00:07:44.127 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:44.127 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:44.127 pt2' 00:07:44.127 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.127 15:15:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:44.127 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.127 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:44.127 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.127 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.127 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.127 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.127 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.127 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.127 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.127 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:44.127 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.127 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.127 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.127 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.127 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.127 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.127 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:44.127 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:44.127 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.127 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.127 [2024-11-20 15:15:30.604744] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:44.385 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.385 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ad72266c-27ef-4b5d-a3e0-cefdeb890e8f 00:07:44.385 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ad72266c-27ef-4b5d-a3e0-cefdeb890e8f ']' 00:07:44.385 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:44.385 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.385 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.385 [2024-11-20 15:15:30.648356] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:44.385 [2024-11-20 15:15:30.648384] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:44.385 [2024-11-20 15:15:30.648471] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:44.385 [2024-11-20 15:15:30.648524] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:44.385 [2024-11-20 15:15:30.648539] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:44.385 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.385 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:44.385 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.385 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.385 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:44.385 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.385 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:44.385 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.386 [2024-11-20 15:15:30.776237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:44.386 [2024-11-20 15:15:30.778567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:44.386 [2024-11-20 15:15:30.778638] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:44.386 [2024-11-20 15:15:30.778716] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:44.386 [2024-11-20 15:15:30.778736] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:44.386 [2024-11-20 15:15:30.778749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:44.386 request: 00:07:44.386 { 00:07:44.386 "name": "raid_bdev1", 00:07:44.386 "raid_level": "concat", 00:07:44.386 "base_bdevs": [ 00:07:44.386 "malloc1", 00:07:44.386 "malloc2" 00:07:44.386 ], 00:07:44.386 "strip_size_kb": 64, 00:07:44.386 "superblock": false, 00:07:44.386 "method": "bdev_raid_create", 00:07:44.386 "req_id": 1 00:07:44.386 } 00:07:44.386 Got JSON-RPC error response 00:07:44.386 response: 00:07:44.386 { 00:07:44.386 "code": -17, 00:07:44.386 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:44.386 } 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:44.386 
15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.386 [2024-11-20 15:15:30.840119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:44.386 [2024-11-20 15:15:30.840298] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.386 [2024-11-20 15:15:30.840357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:44.386 [2024-11-20 15:15:30.840436] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.386 [2024-11-20 15:15:30.843065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.386 [2024-11-20 15:15:30.843210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:44.386 [2024-11-20 15:15:30.843389] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:44.386 [2024-11-20 15:15:30.843528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:44.386 pt1 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.386 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:44.644 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.645 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.645 "name": "raid_bdev1", 00:07:44.645 "uuid": "ad72266c-27ef-4b5d-a3e0-cefdeb890e8f", 00:07:44.645 "strip_size_kb": 64, 00:07:44.645 "state": "configuring", 00:07:44.645 "raid_level": "concat", 00:07:44.645 "superblock": true, 00:07:44.645 "num_base_bdevs": 2, 00:07:44.645 "num_base_bdevs_discovered": 1, 00:07:44.645 "num_base_bdevs_operational": 2, 00:07:44.645 "base_bdevs_list": [ 00:07:44.645 { 00:07:44.645 "name": "pt1", 00:07:44.645 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:44.645 "is_configured": true, 00:07:44.645 "data_offset": 2048, 00:07:44.645 "data_size": 63488 00:07:44.645 }, 00:07:44.645 { 00:07:44.645 "name": null, 00:07:44.645 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:44.645 "is_configured": false, 00:07:44.645 "data_offset": 2048, 00:07:44.645 "data_size": 63488 00:07:44.645 } 00:07:44.645 ] 00:07:44.645 }' 00:07:44.645 15:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.645 15:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.903 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:44.903 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:44.903 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:44.903 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:44.903 15:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.903 15:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.903 [2024-11-20 15:15:31.275541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:44.903 [2024-11-20 15:15:31.275621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.903 [2024-11-20 15:15:31.275646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:44.903 [2024-11-20 15:15:31.275678] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.903 [2024-11-20 15:15:31.276142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.903 [2024-11-20 15:15:31.276172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:07:44.903 [2024-11-20 15:15:31.276261] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:44.903 [2024-11-20 15:15:31.276292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:44.903 [2024-11-20 15:15:31.276399] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:44.903 [2024-11-20 15:15:31.276412] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:44.903 [2024-11-20 15:15:31.276686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:44.903 [2024-11-20 15:15:31.276814] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:44.903 [2024-11-20 15:15:31.276823] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:44.903 [2024-11-20 15:15:31.276951] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:44.903 pt2 00:07:44.903 15:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.903 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:44.903 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:44.903 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:44.903 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:44.903 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:44.903 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:44.903 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.903 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:07:44.903 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.903 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.903 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.903 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.903 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.903 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:44.903 15:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.903 15:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.903 15:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.903 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.903 "name": "raid_bdev1", 00:07:44.903 "uuid": "ad72266c-27ef-4b5d-a3e0-cefdeb890e8f", 00:07:44.903 "strip_size_kb": 64, 00:07:44.903 "state": "online", 00:07:44.903 "raid_level": "concat", 00:07:44.903 "superblock": true, 00:07:44.903 "num_base_bdevs": 2, 00:07:44.903 "num_base_bdevs_discovered": 2, 00:07:44.903 "num_base_bdevs_operational": 2, 00:07:44.904 "base_bdevs_list": [ 00:07:44.904 { 00:07:44.904 "name": "pt1", 00:07:44.904 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:44.904 "is_configured": true, 00:07:44.904 "data_offset": 2048, 00:07:44.904 "data_size": 63488 00:07:44.904 }, 00:07:44.904 { 00:07:44.904 "name": "pt2", 00:07:44.904 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:44.904 "is_configured": true, 00:07:44.904 "data_offset": 2048, 00:07:44.904 "data_size": 63488 00:07:44.904 } 00:07:44.904 ] 00:07:44.904 }' 00:07:44.904 15:15:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.904 15:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.470 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:45.470 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:45.470 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:45.470 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:45.470 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:45.470 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:45.470 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:45.470 15:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.470 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:45.470 15:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.470 [2024-11-20 15:15:31.731735] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:45.470 15:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.470 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:45.470 "name": "raid_bdev1", 00:07:45.470 "aliases": [ 00:07:45.470 "ad72266c-27ef-4b5d-a3e0-cefdeb890e8f" 00:07:45.470 ], 00:07:45.470 "product_name": "Raid Volume", 00:07:45.470 "block_size": 512, 00:07:45.470 "num_blocks": 126976, 00:07:45.470 "uuid": "ad72266c-27ef-4b5d-a3e0-cefdeb890e8f", 00:07:45.470 "assigned_rate_limits": { 00:07:45.470 "rw_ios_per_sec": 0, 00:07:45.470 "rw_mbytes_per_sec": 0, 00:07:45.470 
"r_mbytes_per_sec": 0, 00:07:45.470 "w_mbytes_per_sec": 0 00:07:45.470 }, 00:07:45.470 "claimed": false, 00:07:45.470 "zoned": false, 00:07:45.470 "supported_io_types": { 00:07:45.470 "read": true, 00:07:45.470 "write": true, 00:07:45.470 "unmap": true, 00:07:45.470 "flush": true, 00:07:45.470 "reset": true, 00:07:45.470 "nvme_admin": false, 00:07:45.470 "nvme_io": false, 00:07:45.470 "nvme_io_md": false, 00:07:45.470 "write_zeroes": true, 00:07:45.470 "zcopy": false, 00:07:45.470 "get_zone_info": false, 00:07:45.470 "zone_management": false, 00:07:45.470 "zone_append": false, 00:07:45.470 "compare": false, 00:07:45.470 "compare_and_write": false, 00:07:45.470 "abort": false, 00:07:45.470 "seek_hole": false, 00:07:45.470 "seek_data": false, 00:07:45.470 "copy": false, 00:07:45.470 "nvme_iov_md": false 00:07:45.470 }, 00:07:45.470 "memory_domains": [ 00:07:45.470 { 00:07:45.470 "dma_device_id": "system", 00:07:45.470 "dma_device_type": 1 00:07:45.470 }, 00:07:45.470 { 00:07:45.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.470 "dma_device_type": 2 00:07:45.470 }, 00:07:45.470 { 00:07:45.470 "dma_device_id": "system", 00:07:45.470 "dma_device_type": 1 00:07:45.470 }, 00:07:45.470 { 00:07:45.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.470 "dma_device_type": 2 00:07:45.470 } 00:07:45.470 ], 00:07:45.470 "driver_specific": { 00:07:45.470 "raid": { 00:07:45.470 "uuid": "ad72266c-27ef-4b5d-a3e0-cefdeb890e8f", 00:07:45.470 "strip_size_kb": 64, 00:07:45.470 "state": "online", 00:07:45.470 "raid_level": "concat", 00:07:45.470 "superblock": true, 00:07:45.470 "num_base_bdevs": 2, 00:07:45.470 "num_base_bdevs_discovered": 2, 00:07:45.470 "num_base_bdevs_operational": 2, 00:07:45.470 "base_bdevs_list": [ 00:07:45.470 { 00:07:45.470 "name": "pt1", 00:07:45.470 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:45.470 "is_configured": true, 00:07:45.470 "data_offset": 2048, 00:07:45.470 "data_size": 63488 00:07:45.470 }, 00:07:45.470 { 00:07:45.470 "name": 
"pt2", 00:07:45.470 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:45.470 "is_configured": true, 00:07:45.470 "data_offset": 2048, 00:07:45.470 "data_size": 63488 00:07:45.470 } 00:07:45.470 ] 00:07:45.470 } 00:07:45.470 } 00:07:45.470 }' 00:07:45.470 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:45.470 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:45.470 pt2' 00:07:45.470 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.470 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:45.470 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:45.470 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:45.470 15:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.470 15:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.471 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.471 15:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.471 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:45.471 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:45.471 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:45.471 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:45.471 15:15:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.471 15:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.471 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.471 15:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.471 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:45.471 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:45.471 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:45.471 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:45.471 15:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.471 15:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.471 [2024-11-20 15:15:31.947716] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:45.771 15:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.771 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ad72266c-27ef-4b5d-a3e0-cefdeb890e8f '!=' ad72266c-27ef-4b5d-a3e0-cefdeb890e8f ']' 00:07:45.771 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:45.771 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:45.771 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:45.771 15:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62084 00:07:45.771 15:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62084 ']' 00:07:45.771 15:15:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 62084 00:07:45.771 15:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:45.771 15:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:45.771 15:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62084 00:07:45.771 killing process with pid 62084 00:07:45.771 15:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:45.771 15:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:45.771 15:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62084' 00:07:45.771 15:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62084 00:07:45.771 [2024-11-20 15:15:32.024058] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:45.771 [2024-11-20 15:15:32.024151] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:45.771 15:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62084 00:07:45.771 [2024-11-20 15:15:32.024203] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:45.771 [2024-11-20 15:15:32.024220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:46.034 [2024-11-20 15:15:32.236444] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:46.968 15:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:46.968 ************************************ 00:07:46.968 END TEST raid_superblock_test 00:07:46.968 ************************************ 00:07:46.968 00:07:46.968 real 0m4.558s 00:07:46.968 user 0m6.414s 00:07:46.968 sys 0m0.852s 00:07:46.968 15:15:33 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.968 15:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.968 15:15:33 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:46.968 15:15:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:46.968 15:15:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.968 15:15:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:46.968 ************************************ 00:07:46.968 START TEST raid_read_error_test 00:07:46.968 ************************************ 00:07:46.968 15:15:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:07:46.968 15:15:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:46.968 15:15:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:46.968 15:15:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:46.969 15:15:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:46.969 15:15:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:46.969 15:15:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:46.969 15:15:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:46.969 15:15:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:46.969 15:15:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:46.969 15:15:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:46.969 15:15:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:46.969 15:15:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 
-- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:46.969 15:15:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:46.969 15:15:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:46.969 15:15:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:46.969 15:15:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:46.969 15:15:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:46.969 15:15:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:46.969 15:15:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:46.969 15:15:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:46.969 15:15:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:46.969 15:15:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:47.227 15:15:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nAxtJVt4sH 00:07:47.227 15:15:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62291 00:07:47.227 15:15:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:47.227 15:15:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62291 00:07:47.227 15:15:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62291 ']' 00:07:47.227 15:15:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.227 15:15:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.227 15:15:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.227 15:15:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.227 15:15:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.227 [2024-11-20 15:15:33.544374] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:07:47.227 [2024-11-20 15:15:33.544500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62291 ] 00:07:47.486 [2024-11-20 15:15:33.724429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.486 [2024-11-20 15:15:33.846803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.745 [2024-11-20 15:15:34.062072] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:47.745 [2024-11-20 15:15:34.062247] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.004 15:15:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.004 15:15:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:48.004 15:15:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:48.004 15:15:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:48.004 15:15:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.004 15:15:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.004 BaseBdev1_malloc 
00:07:48.004 15:15:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.004 15:15:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:48.004 15:15:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.004 15:15:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.004 true 00:07:48.004 15:15:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.004 15:15:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:48.004 15:15:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.004 15:15:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.004 [2024-11-20 15:15:34.445128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:48.004 [2024-11-20 15:15:34.445188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.004 [2024-11-20 15:15:34.445211] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:48.004 [2024-11-20 15:15:34.445225] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.004 [2024-11-20 15:15:34.447668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.004 [2024-11-20 15:15:34.447713] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:48.004 BaseBdev1 00:07:48.004 15:15:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.004 15:15:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:48.004 15:15:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:07:48.004 15:15:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.004 15:15:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.263 BaseBdev2_malloc 00:07:48.263 15:15:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.263 15:15:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:48.263 15:15:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.263 15:15:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.263 true 00:07:48.263 15:15:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.263 15:15:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:48.263 15:15:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.263 15:15:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.263 [2024-11-20 15:15:34.513594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:48.263 [2024-11-20 15:15:34.513651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.263 [2024-11-20 15:15:34.513686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:48.263 [2024-11-20 15:15:34.513700] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.263 [2024-11-20 15:15:34.516098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.263 [2024-11-20 15:15:34.516408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:48.263 BaseBdev2 00:07:48.263 15:15:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.263 15:15:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:48.263 15:15:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.263 15:15:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.263 [2024-11-20 15:15:34.525646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:48.263 [2024-11-20 15:15:34.527776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:48.263 [2024-11-20 15:15:34.527974] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:48.263 [2024-11-20 15:15:34.527992] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:48.263 [2024-11-20 15:15:34.528260] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:48.263 [2024-11-20 15:15:34.528498] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:48.263 [2024-11-20 15:15:34.528521] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:48.263 [2024-11-20 15:15:34.528749] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:48.263 15:15:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.263 15:15:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:48.263 15:15:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:48.263 15:15:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:48.263 15:15:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- 
# local raid_level=concat 00:07:48.263 15:15:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.264 15:15:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:48.264 15:15:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.264 15:15:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.264 15:15:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.264 15:15:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.264 15:15:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:48.264 15:15:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.264 15:15:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.264 15:15:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.264 15:15:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.264 15:15:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.264 "name": "raid_bdev1", 00:07:48.264 "uuid": "bf03a99d-fb2c-4217-bffd-d66f56114e3f", 00:07:48.264 "strip_size_kb": 64, 00:07:48.264 "state": "online", 00:07:48.264 "raid_level": "concat", 00:07:48.264 "superblock": true, 00:07:48.264 "num_base_bdevs": 2, 00:07:48.264 "num_base_bdevs_discovered": 2, 00:07:48.264 "num_base_bdevs_operational": 2, 00:07:48.264 "base_bdevs_list": [ 00:07:48.264 { 00:07:48.264 "name": "BaseBdev1", 00:07:48.264 "uuid": "4a89e651-1c32-5b77-8569-d9661f74d072", 00:07:48.264 "is_configured": true, 00:07:48.264 "data_offset": 2048, 00:07:48.264 "data_size": 63488 00:07:48.264 }, 00:07:48.264 { 00:07:48.264 "name": "BaseBdev2", 00:07:48.264 
"uuid": "98e7411c-be8f-5846-84b1-12ddf4618b75", 00:07:48.264 "is_configured": true, 00:07:48.264 "data_offset": 2048, 00:07:48.264 "data_size": 63488 00:07:48.264 } 00:07:48.264 ] 00:07:48.264 }' 00:07:48.264 15:15:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.264 15:15:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.522 15:15:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:48.522 15:15:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:48.781 [2024-11-20 15:15:35.034280] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:49.717 15:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:49.717 15:15:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.717 15:15:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.717 15:15:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.717 15:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:49.717 15:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:49.717 15:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:49.717 15:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:49.717 15:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:49.717 15:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:49.717 15:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:07:49.717 15:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.717 15:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.717 15:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.717 15:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.717 15:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.717 15:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.717 15:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.717 15:15:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.717 15:15:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.717 15:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:49.717 15:15:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.717 15:15:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.717 "name": "raid_bdev1", 00:07:49.717 "uuid": "bf03a99d-fb2c-4217-bffd-d66f56114e3f", 00:07:49.717 "strip_size_kb": 64, 00:07:49.717 "state": "online", 00:07:49.717 "raid_level": "concat", 00:07:49.717 "superblock": true, 00:07:49.717 "num_base_bdevs": 2, 00:07:49.717 "num_base_bdevs_discovered": 2, 00:07:49.717 "num_base_bdevs_operational": 2, 00:07:49.717 "base_bdevs_list": [ 00:07:49.717 { 00:07:49.717 "name": "BaseBdev1", 00:07:49.717 "uuid": "4a89e651-1c32-5b77-8569-d9661f74d072", 00:07:49.717 "is_configured": true, 00:07:49.717 "data_offset": 2048, 00:07:49.717 "data_size": 63488 00:07:49.717 }, 00:07:49.717 { 00:07:49.717 "name": "BaseBdev2", 00:07:49.717 "uuid": 
"98e7411c-be8f-5846-84b1-12ddf4618b75", 00:07:49.717 "is_configured": true, 00:07:49.717 "data_offset": 2048, 00:07:49.717 "data_size": 63488 00:07:49.717 } 00:07:49.717 ] 00:07:49.717 }' 00:07:49.717 15:15:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.717 15:15:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.977 15:15:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:49.977 15:15:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.977 15:15:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.977 [2024-11-20 15:15:36.363643] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:49.977 [2024-11-20 15:15:36.363693] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:49.977 [2024-11-20 15:15:36.366306] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:49.977 [2024-11-20 15:15:36.366502] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.977 [2024-11-20 15:15:36.366550] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:49.977 [2024-11-20 15:15:36.366567] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:49.977 { 00:07:49.977 "results": [ 00:07:49.977 { 00:07:49.977 "job": "raid_bdev1", 00:07:49.977 "core_mask": "0x1", 00:07:49.977 "workload": "randrw", 00:07:49.977 "percentage": 50, 00:07:49.977 "status": "finished", 00:07:49.977 "queue_depth": 1, 00:07:49.977 "io_size": 131072, 00:07:49.977 "runtime": 1.328568, 00:07:49.977 "iops": 14754.231623823545, 00:07:49.977 "mibps": 1844.2789529779432, 00:07:49.977 "io_failed": 1, 00:07:49.977 "io_timeout": 0, 00:07:49.977 "avg_latency_us": 
93.33062478962424, 00:07:49.977 "min_latency_us": 26.936546184738955, 00:07:49.977 "max_latency_us": 1447.5823293172691 00:07:49.977 } 00:07:49.977 ], 00:07:49.977 "core_count": 1 00:07:49.977 } 00:07:49.977 15:15:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.977 15:15:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62291 00:07:49.977 15:15:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62291 ']' 00:07:49.977 15:15:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62291 00:07:49.977 15:15:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:49.977 15:15:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.977 15:15:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62291 00:07:49.977 15:15:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:49.977 15:15:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:49.977 15:15:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62291' 00:07:49.977 killing process with pid 62291 00:07:49.977 15:15:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62291 00:07:49.977 [2024-11-20 15:15:36.418556] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:49.977 15:15:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62291 00:07:50.237 [2024-11-20 15:15:36.556719] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:51.693 15:15:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nAxtJVt4sH 00:07:51.693 15:15:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:51.693 
15:15:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:51.693 15:15:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:07:51.693 15:15:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:51.693 15:15:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:51.693 15:15:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:51.693 15:15:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:07:51.693 00:07:51.693 real 0m4.381s 00:07:51.693 user 0m5.163s 00:07:51.693 sys 0m0.557s 00:07:51.693 ************************************ 00:07:51.693 END TEST raid_read_error_test 00:07:51.693 ************************************ 00:07:51.693 15:15:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.693 15:15:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.693 15:15:37 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:51.693 15:15:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:51.693 15:15:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.693 15:15:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:51.693 ************************************ 00:07:51.693 START TEST raid_write_error_test 00:07:51.693 ************************************ 00:07:51.693 15:15:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:07:51.693 15:15:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:51.693 15:15:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:51.693 15:15:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:07:51.693 15:15:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:51.693 15:15:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:51.693 15:15:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:51.693 15:15:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:51.693 15:15:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:51.693 15:15:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:51.693 15:15:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:51.693 15:15:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:51.693 15:15:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:51.693 15:15:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:51.693 15:15:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:51.693 15:15:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:51.693 15:15:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:51.693 15:15:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:51.693 15:15:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:51.693 15:15:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:51.693 15:15:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:51.693 15:15:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:51.694 15:15:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:51.694 15:15:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ugbBKjBFUs 00:07:51.694 15:15:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62431 00:07:51.694 15:15:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:51.694 15:15:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62431 00:07:51.694 15:15:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62431 ']' 00:07:51.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.694 15:15:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.694 15:15:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.694 15:15:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.694 15:15:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.694 15:15:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.694 [2024-11-20 15:15:38.022513] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:07:51.694 [2024-11-20 15:15:38.022647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62431 ] 00:07:51.953 [2024-11-20 15:15:38.209107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.953 [2024-11-20 15:15:38.338343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.212 [2024-11-20 15:15:38.565627] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:52.212 [2024-11-20 15:15:38.565687] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:52.782 15:15:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.782 15:15:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:52.782 15:15:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:52.782 15:15:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:52.782 15:15:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.782 15:15:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.782 BaseBdev1_malloc 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.782 true 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.782 [2024-11-20 15:15:39.024347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:52.782 [2024-11-20 15:15:39.024410] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:52.782 [2024-11-20 15:15:39.024432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:52.782 [2024-11-20 15:15:39.024447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:52.782 [2024-11-20 15:15:39.026813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:52.782 [2024-11-20 15:15:39.026858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:52.782 BaseBdev1 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.782 BaseBdev2_malloc 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:52.782 15:15:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.782 true 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.782 [2024-11-20 15:15:39.093297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:52.782 [2024-11-20 15:15:39.093361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:52.782 [2024-11-20 15:15:39.093381] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:52.782 [2024-11-20 15:15:39.093395] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:52.782 [2024-11-20 15:15:39.095907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:52.782 [2024-11-20 15:15:39.095953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:52.782 BaseBdev2 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.782 [2024-11-20 15:15:39.105338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:52.782 [2024-11-20 15:15:39.107524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:52.782 [2024-11-20 15:15:39.107736] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:52.782 [2024-11-20 15:15:39.107755] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:52.782 [2024-11-20 15:15:39.108015] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:52.782 [2024-11-20 15:15:39.108200] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:52.782 [2024-11-20 15:15:39.108214] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:52.782 [2024-11-20 15:15:39.108376] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.782 15:15:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.782 "name": "raid_bdev1", 00:07:52.782 "uuid": "171ec862-db46-48de-8675-c27feb17eed5", 00:07:52.782 "strip_size_kb": 64, 00:07:52.782 "state": "online", 00:07:52.782 "raid_level": "concat", 00:07:52.782 "superblock": true, 00:07:52.782 "num_base_bdevs": 2, 00:07:52.782 "num_base_bdevs_discovered": 2, 00:07:52.782 "num_base_bdevs_operational": 2, 00:07:52.782 "base_bdevs_list": [ 00:07:52.782 { 00:07:52.782 "name": "BaseBdev1", 00:07:52.782 "uuid": "eca756f8-c71b-584b-8736-13beac77d59f", 00:07:52.782 "is_configured": true, 00:07:52.782 "data_offset": 2048, 00:07:52.782 "data_size": 63488 00:07:52.782 }, 00:07:52.782 { 00:07:52.782 "name": "BaseBdev2", 00:07:52.782 "uuid": "e7879066-b8b4-585b-9251-d558604613ed", 00:07:52.782 "is_configured": true, 00:07:52.782 "data_offset": 2048, 00:07:52.782 "data_size": 63488 00:07:52.782 } 00:07:52.782 ] 00:07:52.782 }' 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.782 15:15:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.350 15:15:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:07:53.350 15:15:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:53.350 [2024-11-20 15:15:39.641971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:54.287 15:15:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:54.287 15:15:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.287 15:15:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.287 15:15:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.287 15:15:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:54.287 15:15:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:54.287 15:15:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:54.287 15:15:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:54.287 15:15:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:54.287 15:15:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:54.288 15:15:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:54.288 15:15:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.288 15:15:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.288 15:15:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.288 15:15:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:54.288 15:15:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.288 15:15:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.288 15:15:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.288 15:15:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:54.288 15:15:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.288 15:15:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.288 15:15:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.288 15:15:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.288 "name": "raid_bdev1", 00:07:54.288 "uuid": "171ec862-db46-48de-8675-c27feb17eed5", 00:07:54.288 "strip_size_kb": 64, 00:07:54.288 "state": "online", 00:07:54.288 "raid_level": "concat", 00:07:54.288 "superblock": true, 00:07:54.288 "num_base_bdevs": 2, 00:07:54.288 "num_base_bdevs_discovered": 2, 00:07:54.288 "num_base_bdevs_operational": 2, 00:07:54.288 "base_bdevs_list": [ 00:07:54.288 { 00:07:54.288 "name": "BaseBdev1", 00:07:54.288 "uuid": "eca756f8-c71b-584b-8736-13beac77d59f", 00:07:54.288 "is_configured": true, 00:07:54.288 "data_offset": 2048, 00:07:54.288 "data_size": 63488 00:07:54.288 }, 00:07:54.288 { 00:07:54.288 "name": "BaseBdev2", 00:07:54.288 "uuid": "e7879066-b8b4-585b-9251-d558604613ed", 00:07:54.288 "is_configured": true, 00:07:54.288 "data_offset": 2048, 00:07:54.288 "data_size": 63488 00:07:54.288 } 00:07:54.288 ] 00:07:54.288 }' 00:07:54.288 15:15:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.288 15:15:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.547 15:15:40 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:54.547 15:15:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.547 15:15:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.547 [2024-11-20 15:15:40.968607] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:54.547 [2024-11-20 15:15:40.968811] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:54.547 [2024-11-20 15:15:40.971462] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:54.547 [2024-11-20 15:15:40.971507] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:54.547 [2024-11-20 15:15:40.971538] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:54.547 [2024-11-20 15:15:40.971552] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:54.548 { 00:07:54.548 "results": [ 00:07:54.548 { 00:07:54.548 "job": "raid_bdev1", 00:07:54.548 "core_mask": "0x1", 00:07:54.548 "workload": "randrw", 00:07:54.548 "percentage": 50, 00:07:54.548 "status": "finished", 00:07:54.548 "queue_depth": 1, 00:07:54.548 "io_size": 131072, 00:07:54.548 "runtime": 1.327014, 00:07:54.548 "iops": 16826.499192924868, 00:07:54.548 "mibps": 2103.3123991156085, 00:07:54.548 "io_failed": 1, 00:07:54.548 "io_timeout": 0, 00:07:54.548 "avg_latency_us": 81.69431251202751, 00:07:54.548 "min_latency_us": 26.936546184738955, 00:07:54.548 "max_latency_us": 1427.8425702811246 00:07:54.548 } 00:07:54.548 ], 00:07:54.548 "core_count": 1 00:07:54.548 } 00:07:54.548 15:15:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.548 15:15:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62431 00:07:54.548 15:15:40 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62431 ']' 00:07:54.548 15:15:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62431 00:07:54.548 15:15:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:54.548 15:15:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:54.548 15:15:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62431 00:07:54.548 killing process with pid 62431 00:07:54.548 15:15:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:54.548 15:15:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:54.548 15:15:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62431' 00:07:54.548 15:15:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62431 00:07:54.548 [2024-11-20 15:15:41.017473] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:54.548 15:15:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62431 00:07:54.817 [2024-11-20 15:15:41.154322] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:56.237 15:15:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:56.237 15:15:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:56.237 15:15:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ugbBKjBFUs 00:07:56.237 15:15:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:07:56.237 15:15:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:56.237 ************************************ 00:07:56.237 END TEST raid_write_error_test 00:07:56.237 
************************************ 00:07:56.237 15:15:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:56.237 15:15:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:56.237 15:15:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:07:56.237 00:07:56.237 real 0m4.461s 00:07:56.237 user 0m5.361s 00:07:56.237 sys 0m0.596s 00:07:56.237 15:15:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.237 15:15:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.237 15:15:42 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:56.237 15:15:42 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:56.237 15:15:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:56.237 15:15:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.237 15:15:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:56.237 ************************************ 00:07:56.237 START TEST raid_state_function_test 00:07:56.237 ************************************ 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62575 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:56.237 Process raid pid: 62575 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62575' 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62575 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62575 ']' 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.237 15:15:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.237 [2024-11-20 15:15:42.531048] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:07:56.237 [2024-11-20 15:15:42.531180] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.237 [2024-11-20 15:15:42.711862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.497 [2024-11-20 15:15:42.833879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.756 [2024-11-20 15:15:43.058611] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.756 [2024-11-20 15:15:43.058675] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.015 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.015 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:57.015 15:15:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:57.015 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.015 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.015 [2024-11-20 15:15:43.443892] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:57.015 [2024-11-20 15:15:43.443955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:57.015 [2024-11-20 15:15:43.443968] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:57.015 [2024-11-20 15:15:43.443982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:57.015 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.015 15:15:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:57.015 15:15:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.015 15:15:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.015 15:15:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:57.015 15:15:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:57.015 15:15:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.015 15:15:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.015 15:15:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.015 15:15:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.015 15:15:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.015 15:15:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.015 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.015 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.015 15:15:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.015 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.015 15:15:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.015 "name": "Existed_Raid", 00:07:57.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.015 "strip_size_kb": 0, 00:07:57.015 "state": "configuring", 00:07:57.015 
"raid_level": "raid1", 00:07:57.015 "superblock": false, 00:07:57.015 "num_base_bdevs": 2, 00:07:57.015 "num_base_bdevs_discovered": 0, 00:07:57.015 "num_base_bdevs_operational": 2, 00:07:57.015 "base_bdevs_list": [ 00:07:57.015 { 00:07:57.015 "name": "BaseBdev1", 00:07:57.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.015 "is_configured": false, 00:07:57.015 "data_offset": 0, 00:07:57.015 "data_size": 0 00:07:57.015 }, 00:07:57.015 { 00:07:57.016 "name": "BaseBdev2", 00:07:57.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.016 "is_configured": false, 00:07:57.016 "data_offset": 0, 00:07:57.016 "data_size": 0 00:07:57.016 } 00:07:57.016 ] 00:07:57.016 }' 00:07:57.016 15:15:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.274 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.534 [2024-11-20 15:15:43.863454] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:57.534 [2024-11-20 15:15:43.863622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:57.534 [2024-11-20 15:15:43.875441] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:57.534 [2024-11-20 15:15:43.875598] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:57.534 [2024-11-20 15:15:43.875728] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:57.534 [2024-11-20 15:15:43.875845] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.534 [2024-11-20 15:15:43.926204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:57.534 BaseBdev1 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.534 [ 00:07:57.534 { 00:07:57.534 "name": "BaseBdev1", 00:07:57.534 "aliases": [ 00:07:57.534 "1a2b52dd-94dd-443b-94c5-00eda54a5293" 00:07:57.534 ], 00:07:57.534 "product_name": "Malloc disk", 00:07:57.534 "block_size": 512, 00:07:57.534 "num_blocks": 65536, 00:07:57.534 "uuid": "1a2b52dd-94dd-443b-94c5-00eda54a5293", 00:07:57.534 "assigned_rate_limits": { 00:07:57.534 "rw_ios_per_sec": 0, 00:07:57.534 "rw_mbytes_per_sec": 0, 00:07:57.534 "r_mbytes_per_sec": 0, 00:07:57.534 "w_mbytes_per_sec": 0 00:07:57.534 }, 00:07:57.534 "claimed": true, 00:07:57.534 "claim_type": "exclusive_write", 00:07:57.534 "zoned": false, 00:07:57.534 "supported_io_types": { 00:07:57.534 "read": true, 00:07:57.534 "write": true, 00:07:57.534 "unmap": true, 00:07:57.534 "flush": true, 00:07:57.534 "reset": true, 00:07:57.534 "nvme_admin": false, 00:07:57.534 "nvme_io": false, 00:07:57.534 "nvme_io_md": false, 00:07:57.534 "write_zeroes": true, 00:07:57.534 "zcopy": true, 00:07:57.534 "get_zone_info": false, 00:07:57.534 "zone_management": false, 00:07:57.534 "zone_append": false, 00:07:57.534 "compare": false, 00:07:57.534 "compare_and_write": false, 00:07:57.534 "abort": true, 00:07:57.534 "seek_hole": false, 00:07:57.534 "seek_data": false, 00:07:57.534 "copy": true, 00:07:57.534 "nvme_iov_md": 
false 00:07:57.534 }, 00:07:57.534 "memory_domains": [ 00:07:57.534 { 00:07:57.534 "dma_device_id": "system", 00:07:57.534 "dma_device_type": 1 00:07:57.534 }, 00:07:57.534 { 00:07:57.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.534 "dma_device_type": 2 00:07:57.534 } 00:07:57.534 ], 00:07:57.534 "driver_specific": {} 00:07:57.534 } 00:07:57.534 ] 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.534 
15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.534 15:15:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.534 15:15:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.534 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.534 "name": "Existed_Raid", 00:07:57.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.534 "strip_size_kb": 0, 00:07:57.535 "state": "configuring", 00:07:57.535 "raid_level": "raid1", 00:07:57.535 "superblock": false, 00:07:57.535 "num_base_bdevs": 2, 00:07:57.535 "num_base_bdevs_discovered": 1, 00:07:57.535 "num_base_bdevs_operational": 2, 00:07:57.535 "base_bdevs_list": [ 00:07:57.535 { 00:07:57.535 "name": "BaseBdev1", 00:07:57.535 "uuid": "1a2b52dd-94dd-443b-94c5-00eda54a5293", 00:07:57.535 "is_configured": true, 00:07:57.535 "data_offset": 0, 00:07:57.535 "data_size": 65536 00:07:57.535 }, 00:07:57.535 { 00:07:57.535 "name": "BaseBdev2", 00:07:57.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.535 "is_configured": false, 00:07:57.535 "data_offset": 0, 00:07:57.535 "data_size": 0 00:07:57.535 } 00:07:57.535 ] 00:07:57.535 }' 00:07:57.535 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.535 15:15:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.101 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:58.101 15:15:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.101 15:15:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.101 [2024-11-20 15:15:44.389594] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:58.101 [2024-11-20 15:15:44.389647] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:58.102 15:15:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.102 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:58.102 15:15:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.102 15:15:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.102 [2024-11-20 15:15:44.401612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:58.102 [2024-11-20 15:15:44.403722] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:58.102 [2024-11-20 15:15:44.403884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:58.102 15:15:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.102 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:58.102 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:58.102 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:58.102 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.102 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:58.102 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:58.102 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:58.102 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:58.102 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.102 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.102 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.102 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.102 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.102 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.102 15:15:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.102 15:15:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.102 15:15:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.102 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.102 "name": "Existed_Raid", 00:07:58.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.102 "strip_size_kb": 0, 00:07:58.102 "state": "configuring", 00:07:58.102 "raid_level": "raid1", 00:07:58.102 "superblock": false, 00:07:58.102 "num_base_bdevs": 2, 00:07:58.102 "num_base_bdevs_discovered": 1, 00:07:58.102 "num_base_bdevs_operational": 2, 00:07:58.102 "base_bdevs_list": [ 00:07:58.102 { 00:07:58.102 "name": "BaseBdev1", 00:07:58.102 "uuid": "1a2b52dd-94dd-443b-94c5-00eda54a5293", 00:07:58.102 "is_configured": true, 00:07:58.102 "data_offset": 0, 00:07:58.102 "data_size": 65536 00:07:58.102 }, 00:07:58.102 { 00:07:58.102 "name": "BaseBdev2", 00:07:58.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.102 "is_configured": false, 00:07:58.102 "data_offset": 0, 00:07:58.102 "data_size": 0 00:07:58.102 } 00:07:58.102 ] 
00:07:58.102 }' 00:07:58.102 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.102 15:15:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.361 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:58.361 15:15:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.361 15:15:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.621 [2024-11-20 15:15:44.864347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:58.621 [2024-11-20 15:15:44.864585] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:58.621 [2024-11-20 15:15:44.864604] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:58.621 [2024-11-20 15:15:44.864913] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:58.621 [2024-11-20 15:15:44.865090] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:58.621 [2024-11-20 15:15:44.865104] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:58.621 [2024-11-20 15:15:44.865366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.621 BaseBdev2 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.621 [ 00:07:58.621 { 00:07:58.621 "name": "BaseBdev2", 00:07:58.621 "aliases": [ 00:07:58.621 "124a43bc-7072-466d-8b88-e40a4fd9e721" 00:07:58.621 ], 00:07:58.621 "product_name": "Malloc disk", 00:07:58.621 "block_size": 512, 00:07:58.621 "num_blocks": 65536, 00:07:58.621 "uuid": "124a43bc-7072-466d-8b88-e40a4fd9e721", 00:07:58.621 "assigned_rate_limits": { 00:07:58.621 "rw_ios_per_sec": 0, 00:07:58.621 "rw_mbytes_per_sec": 0, 00:07:58.621 "r_mbytes_per_sec": 0, 00:07:58.621 "w_mbytes_per_sec": 0 00:07:58.621 }, 00:07:58.621 "claimed": true, 00:07:58.621 "claim_type": "exclusive_write", 00:07:58.621 "zoned": false, 00:07:58.621 "supported_io_types": { 00:07:58.621 "read": true, 00:07:58.621 "write": true, 00:07:58.621 "unmap": true, 00:07:58.621 "flush": true, 00:07:58.621 "reset": true, 00:07:58.621 "nvme_admin": false, 00:07:58.621 "nvme_io": false, 00:07:58.621 "nvme_io_md": false, 00:07:58.621 "write_zeroes": 
true, 00:07:58.621 "zcopy": true, 00:07:58.621 "get_zone_info": false, 00:07:58.621 "zone_management": false, 00:07:58.621 "zone_append": false, 00:07:58.621 "compare": false, 00:07:58.621 "compare_and_write": false, 00:07:58.621 "abort": true, 00:07:58.621 "seek_hole": false, 00:07:58.621 "seek_data": false, 00:07:58.621 "copy": true, 00:07:58.621 "nvme_iov_md": false 00:07:58.621 }, 00:07:58.621 "memory_domains": [ 00:07:58.621 { 00:07:58.621 "dma_device_id": "system", 00:07:58.621 "dma_device_type": 1 00:07:58.621 }, 00:07:58.621 { 00:07:58.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.621 "dma_device_type": 2 00:07:58.621 } 00:07:58.621 ], 00:07:58.621 "driver_specific": {} 00:07:58.621 } 00:07:58.621 ] 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.621 15:15:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.621 "name": "Existed_Raid", 00:07:58.621 "uuid": "b76176d9-5d34-49ff-88e9-ed188cbb453d", 00:07:58.621 "strip_size_kb": 0, 00:07:58.621 "state": "online", 00:07:58.621 "raid_level": "raid1", 00:07:58.621 "superblock": false, 00:07:58.621 "num_base_bdevs": 2, 00:07:58.621 "num_base_bdevs_discovered": 2, 00:07:58.621 "num_base_bdevs_operational": 2, 00:07:58.621 "base_bdevs_list": [ 00:07:58.621 { 00:07:58.621 "name": "BaseBdev1", 00:07:58.621 "uuid": "1a2b52dd-94dd-443b-94c5-00eda54a5293", 00:07:58.621 "is_configured": true, 00:07:58.621 "data_offset": 0, 00:07:58.621 "data_size": 65536 00:07:58.621 }, 00:07:58.621 { 00:07:58.621 "name": "BaseBdev2", 00:07:58.621 "uuid": "124a43bc-7072-466d-8b88-e40a4fd9e721", 00:07:58.621 "is_configured": true, 00:07:58.621 "data_offset": 0, 00:07:58.621 "data_size": 65536 00:07:58.621 } 00:07:58.621 ] 00:07:58.621 }' 00:07:58.621 15:15:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.621 15:15:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.880 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:58.880 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:58.880 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:58.880 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:58.880 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:58.880 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:58.880 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:58.880 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:58.880 15:15:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.880 15:15:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.880 [2024-11-20 15:15:45.356031] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:59.140 15:15:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.140 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:59.140 "name": "Existed_Raid", 00:07:59.140 "aliases": [ 00:07:59.140 "b76176d9-5d34-49ff-88e9-ed188cbb453d" 00:07:59.140 ], 00:07:59.140 "product_name": "Raid Volume", 00:07:59.140 "block_size": 512, 00:07:59.140 "num_blocks": 65536, 00:07:59.140 "uuid": "b76176d9-5d34-49ff-88e9-ed188cbb453d", 00:07:59.140 "assigned_rate_limits": { 00:07:59.140 "rw_ios_per_sec": 0, 00:07:59.140 "rw_mbytes_per_sec": 0, 00:07:59.140 "r_mbytes_per_sec": 0, 00:07:59.140 
"w_mbytes_per_sec": 0 00:07:59.140 }, 00:07:59.140 "claimed": false, 00:07:59.140 "zoned": false, 00:07:59.140 "supported_io_types": { 00:07:59.140 "read": true, 00:07:59.140 "write": true, 00:07:59.140 "unmap": false, 00:07:59.140 "flush": false, 00:07:59.140 "reset": true, 00:07:59.140 "nvme_admin": false, 00:07:59.140 "nvme_io": false, 00:07:59.140 "nvme_io_md": false, 00:07:59.140 "write_zeroes": true, 00:07:59.140 "zcopy": false, 00:07:59.140 "get_zone_info": false, 00:07:59.140 "zone_management": false, 00:07:59.140 "zone_append": false, 00:07:59.140 "compare": false, 00:07:59.140 "compare_and_write": false, 00:07:59.140 "abort": false, 00:07:59.140 "seek_hole": false, 00:07:59.140 "seek_data": false, 00:07:59.140 "copy": false, 00:07:59.140 "nvme_iov_md": false 00:07:59.140 }, 00:07:59.140 "memory_domains": [ 00:07:59.140 { 00:07:59.140 "dma_device_id": "system", 00:07:59.140 "dma_device_type": 1 00:07:59.140 }, 00:07:59.140 { 00:07:59.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.140 "dma_device_type": 2 00:07:59.140 }, 00:07:59.140 { 00:07:59.140 "dma_device_id": "system", 00:07:59.140 "dma_device_type": 1 00:07:59.140 }, 00:07:59.140 { 00:07:59.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.140 "dma_device_type": 2 00:07:59.140 } 00:07:59.140 ], 00:07:59.140 "driver_specific": { 00:07:59.140 "raid": { 00:07:59.140 "uuid": "b76176d9-5d34-49ff-88e9-ed188cbb453d", 00:07:59.140 "strip_size_kb": 0, 00:07:59.140 "state": "online", 00:07:59.140 "raid_level": "raid1", 00:07:59.140 "superblock": false, 00:07:59.140 "num_base_bdevs": 2, 00:07:59.140 "num_base_bdevs_discovered": 2, 00:07:59.140 "num_base_bdevs_operational": 2, 00:07:59.140 "base_bdevs_list": [ 00:07:59.140 { 00:07:59.140 "name": "BaseBdev1", 00:07:59.140 "uuid": "1a2b52dd-94dd-443b-94c5-00eda54a5293", 00:07:59.140 "is_configured": true, 00:07:59.140 "data_offset": 0, 00:07:59.140 "data_size": 65536 00:07:59.140 }, 00:07:59.140 { 00:07:59.140 "name": "BaseBdev2", 00:07:59.140 "uuid": 
"124a43bc-7072-466d-8b88-e40a4fd9e721", 00:07:59.140 "is_configured": true, 00:07:59.140 "data_offset": 0, 00:07:59.140 "data_size": 65536 00:07:59.140 } 00:07:59.140 ] 00:07:59.140 } 00:07:59.140 } 00:07:59.140 }' 00:07:59.140 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:59.140 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:59.140 BaseBdev2' 00:07:59.140 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.140 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:59.140 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.140 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:59.140 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.140 15:15:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.140 15:15:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.140 15:15:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.140 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.140 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.140 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.140 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:59.140 15:15:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.140 15:15:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.140 15:15:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.140 15:15:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.140 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.140 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.141 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:59.141 15:15:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.141 15:15:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.141 [2024-11-20 15:15:45.587494] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:59.400 15:15:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.400 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:59.400 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:59.400 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:59.400 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:59.400 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:59.400 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:59.400 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:07:59.400 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:59.400 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:59.400 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:59.400 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:59.400 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.400 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.400 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.400 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.400 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.400 15:15:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.400 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.400 15:15:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.400 15:15:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.400 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.400 "name": "Existed_Raid", 00:07:59.400 "uuid": "b76176d9-5d34-49ff-88e9-ed188cbb453d", 00:07:59.400 "strip_size_kb": 0, 00:07:59.400 "state": "online", 00:07:59.400 "raid_level": "raid1", 00:07:59.400 "superblock": false, 00:07:59.400 "num_base_bdevs": 2, 00:07:59.400 "num_base_bdevs_discovered": 1, 00:07:59.400 "num_base_bdevs_operational": 1, 00:07:59.400 "base_bdevs_list": [ 00:07:59.400 { 
00:07:59.400 "name": null, 00:07:59.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.400 "is_configured": false, 00:07:59.400 "data_offset": 0, 00:07:59.400 "data_size": 65536 00:07:59.400 }, 00:07:59.400 { 00:07:59.400 "name": "BaseBdev2", 00:07:59.400 "uuid": "124a43bc-7072-466d-8b88-e40a4fd9e721", 00:07:59.400 "is_configured": true, 00:07:59.400 "data_offset": 0, 00:07:59.400 "data_size": 65536 00:07:59.400 } 00:07:59.400 ] 00:07:59.400 }' 00:07:59.400 15:15:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.400 15:15:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.659 15:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:59.659 15:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:59.659 15:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.659 15:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:59.659 15:15:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.659 15:15:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.659 15:15:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.659 15:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:59.659 15:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:59.659 15:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:59.659 15:15:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.659 15:15:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:59.659 [2024-11-20 15:15:46.123458] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:59.659 [2024-11-20 15:15:46.123714] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:59.918 [2024-11-20 15:15:46.221772] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.918 [2024-11-20 15:15:46.221832] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:59.918 [2024-11-20 15:15:46.221847] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:59.918 15:15:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.918 15:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:59.918 15:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:59.918 15:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.918 15:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:59.918 15:15:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.918 15:15:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.918 15:15:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.918 15:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:59.918 15:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:59.918 15:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:59.918 15:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62575 00:07:59.918 15:15:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62575 ']' 00:07:59.918 15:15:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62575 00:07:59.918 15:15:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:59.918 15:15:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.918 15:15:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62575 00:07:59.918 killing process with pid 62575 00:07:59.918 15:15:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:59.918 15:15:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:59.918 15:15:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62575' 00:07:59.918 15:15:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62575 00:07:59.918 [2024-11-20 15:15:46.306709] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:59.918 15:15:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62575 00:07:59.918 [2024-11-20 15:15:46.324034] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:01.297 00:08:01.297 real 0m5.046s 00:08:01.297 user 0m7.234s 00:08:01.297 sys 0m0.927s 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.297 ************************************ 00:08:01.297 END TEST raid_state_function_test 00:08:01.297 ************************************ 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.297 15:15:47 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:01.297 15:15:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:01.297 15:15:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.297 15:15:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:01.297 ************************************ 00:08:01.297 START TEST raid_state_function_test_sb 00:08:01.297 ************************************ 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62828 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62828' 00:08:01.297 Process raid pid: 62828 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62828 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62828 ']' 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.297 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.297 15:15:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.297 [2024-11-20 15:15:47.666128] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:08:01.297 [2024-11-20 15:15:47.666270] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.556 [2024-11-20 15:15:47.859755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.556 [2024-11-20 15:15:47.977410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.814 [2024-11-20 15:15:48.191357] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.814 [2024-11-20 15:15:48.191408] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.073 15:15:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.073 15:15:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:02.073 15:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:02.073 15:15:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.073 15:15:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.073 [2024-11-20 15:15:48.543496] 
bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:02.073 [2024-11-20 15:15:48.543553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:02.073 [2024-11-20 15:15:48.543565] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:02.074 [2024-11-20 15:15:48.543580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:02.074 15:15:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.074 15:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:02.074 15:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.074 15:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.074 15:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:02.074 15:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:02.074 15:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.074 15:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.074 15:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.074 15:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.074 15:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.074 15:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.074 15:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:08:02.333 15:15:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.333 15:15:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.333 15:15:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.333 15:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.333 "name": "Existed_Raid", 00:08:02.333 "uuid": "e8df6236-4f63-4603-858e-475c69a50bd4", 00:08:02.333 "strip_size_kb": 0, 00:08:02.333 "state": "configuring", 00:08:02.333 "raid_level": "raid1", 00:08:02.333 "superblock": true, 00:08:02.333 "num_base_bdevs": 2, 00:08:02.333 "num_base_bdevs_discovered": 0, 00:08:02.333 "num_base_bdevs_operational": 2, 00:08:02.333 "base_bdevs_list": [ 00:08:02.333 { 00:08:02.333 "name": "BaseBdev1", 00:08:02.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.333 "is_configured": false, 00:08:02.333 "data_offset": 0, 00:08:02.333 "data_size": 0 00:08:02.333 }, 00:08:02.333 { 00:08:02.333 "name": "BaseBdev2", 00:08:02.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.333 "is_configured": false, 00:08:02.333 "data_offset": 0, 00:08:02.333 "data_size": 0 00:08:02.333 } 00:08:02.333 ] 00:08:02.333 }' 00:08:02.333 15:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.333 15:15:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.592 15:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:02.592 15:15:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.592 15:15:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.592 [2024-11-20 15:15:48.975495] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:08:02.592 [2024-11-20 15:15:48.975540] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:02.592 15:15:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.592 15:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:02.592 15:15:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.593 15:15:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.593 [2024-11-20 15:15:48.987457] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:02.593 [2024-11-20 15:15:48.987502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:02.593 [2024-11-20 15:15:48.987514] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:02.593 [2024-11-20 15:15:48.987531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:02.593 15:15:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.593 15:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:02.593 15:15:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.593 15:15:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.593 [2024-11-20 15:15:49.040497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:02.593 BaseBdev1 00:08:02.593 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.593 15:15:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:02.593 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:02.593 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:02.593 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:02.593 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:02.593 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:02.593 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:02.593 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.593 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.593 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.593 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:02.593 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.593 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.593 [ 00:08:02.593 { 00:08:02.593 "name": "BaseBdev1", 00:08:02.593 "aliases": [ 00:08:02.593 "1995bf09-0f0e-4b1f-8dd5-07b61401a516" 00:08:02.593 ], 00:08:02.593 "product_name": "Malloc disk", 00:08:02.593 "block_size": 512, 00:08:02.593 "num_blocks": 65536, 00:08:02.593 "uuid": "1995bf09-0f0e-4b1f-8dd5-07b61401a516", 00:08:02.593 "assigned_rate_limits": { 00:08:02.593 "rw_ios_per_sec": 0, 00:08:02.593 "rw_mbytes_per_sec": 0, 00:08:02.593 "r_mbytes_per_sec": 0, 00:08:02.593 "w_mbytes_per_sec": 0 00:08:02.593 }, 00:08:02.593 "claimed": true, 
00:08:02.593 "claim_type": "exclusive_write", 00:08:02.593 "zoned": false, 00:08:02.593 "supported_io_types": { 00:08:02.593 "read": true, 00:08:02.593 "write": true, 00:08:02.593 "unmap": true, 00:08:02.593 "flush": true, 00:08:02.593 "reset": true, 00:08:02.593 "nvme_admin": false, 00:08:02.593 "nvme_io": false, 00:08:02.593 "nvme_io_md": false, 00:08:02.593 "write_zeroes": true, 00:08:02.593 "zcopy": true, 00:08:02.593 "get_zone_info": false, 00:08:02.593 "zone_management": false, 00:08:02.593 "zone_append": false, 00:08:02.593 "compare": false, 00:08:02.593 "compare_and_write": false, 00:08:02.593 "abort": true, 00:08:02.593 "seek_hole": false, 00:08:02.593 "seek_data": false, 00:08:02.593 "copy": true, 00:08:02.593 "nvme_iov_md": false 00:08:02.593 }, 00:08:02.593 "memory_domains": [ 00:08:02.593 { 00:08:02.593 "dma_device_id": "system", 00:08:02.593 "dma_device_type": 1 00:08:02.864 }, 00:08:02.864 { 00:08:02.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.864 "dma_device_type": 2 00:08:02.864 } 00:08:02.864 ], 00:08:02.864 "driver_specific": {} 00:08:02.864 } 00:08:02.864 ] 00:08:02.864 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.864 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:02.864 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:02.864 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.864 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.865 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:02.865 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:02.865 15:15:49 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.865 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.865 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.865 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.865 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.865 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.865 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.865 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.865 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.865 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.865 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.865 "name": "Existed_Raid", 00:08:02.865 "uuid": "29782732-c193-4aee-9493-087bc379a4a0", 00:08:02.865 "strip_size_kb": 0, 00:08:02.865 "state": "configuring", 00:08:02.865 "raid_level": "raid1", 00:08:02.865 "superblock": true, 00:08:02.865 "num_base_bdevs": 2, 00:08:02.865 "num_base_bdevs_discovered": 1, 00:08:02.865 "num_base_bdevs_operational": 2, 00:08:02.865 "base_bdevs_list": [ 00:08:02.865 { 00:08:02.865 "name": "BaseBdev1", 00:08:02.865 "uuid": "1995bf09-0f0e-4b1f-8dd5-07b61401a516", 00:08:02.865 "is_configured": true, 00:08:02.865 "data_offset": 2048, 00:08:02.865 "data_size": 63488 00:08:02.865 }, 00:08:02.865 { 00:08:02.865 "name": "BaseBdev2", 00:08:02.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.865 "is_configured": false, 00:08:02.865 
"data_offset": 0, 00:08:02.865 "data_size": 0 00:08:02.865 } 00:08:02.865 ] 00:08:02.865 }' 00:08:02.865 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.865 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.148 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:03.148 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.148 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.148 [2024-11-20 15:15:49.479958] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:03.148 [2024-11-20 15:15:49.480030] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:03.148 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.148 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:03.148 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.148 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.148 [2024-11-20 15:15:49.488008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:03.148 [2024-11-20 15:15:49.490409] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:03.148 [2024-11-20 15:15:49.490457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:03.148 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.148 15:15:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:03.148 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:03.148 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:03.148 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.148 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.148 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:03.148 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:03.148 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:03.148 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.148 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.148 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.148 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.148 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.148 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.148 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.148 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.148 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.148 15:15:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.148 "name": "Existed_Raid", 00:08:03.148 "uuid": "1341ae9f-f9ad-420e-837f-f2ba91c03a3e", 00:08:03.148 "strip_size_kb": 0, 00:08:03.148 "state": "configuring", 00:08:03.148 "raid_level": "raid1", 00:08:03.148 "superblock": true, 00:08:03.148 "num_base_bdevs": 2, 00:08:03.148 "num_base_bdevs_discovered": 1, 00:08:03.148 "num_base_bdevs_operational": 2, 00:08:03.148 "base_bdevs_list": [ 00:08:03.148 { 00:08:03.148 "name": "BaseBdev1", 00:08:03.148 "uuid": "1995bf09-0f0e-4b1f-8dd5-07b61401a516", 00:08:03.148 "is_configured": true, 00:08:03.148 "data_offset": 2048, 00:08:03.148 "data_size": 63488 00:08:03.148 }, 00:08:03.148 { 00:08:03.148 "name": "BaseBdev2", 00:08:03.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.148 "is_configured": false, 00:08:03.148 "data_offset": 0, 00:08:03.148 "data_size": 0 00:08:03.148 } 00:08:03.148 ] 00:08:03.148 }' 00:08:03.148 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.148 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.716 [2024-11-20 15:15:49.950981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:03.716 [2024-11-20 15:15:49.951308] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:03.716 [2024-11-20 15:15:49.951329] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:03.716 [2024-11-20 15:15:49.951669] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:03.716 
[2024-11-20 15:15:49.951884] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:03.716 [2024-11-20 15:15:49.951918] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:03.716 BaseBdev2 00:08:03.716 [2024-11-20 15:15:49.952100] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:03.716 [ 00:08:03.716 { 00:08:03.716 "name": "BaseBdev2", 00:08:03.716 "aliases": [ 00:08:03.716 "c30e84e6-5e1d-4313-ad1c-9f2bff53804e" 00:08:03.716 ], 00:08:03.716 "product_name": "Malloc disk", 00:08:03.716 "block_size": 512, 00:08:03.716 "num_blocks": 65536, 00:08:03.716 "uuid": "c30e84e6-5e1d-4313-ad1c-9f2bff53804e", 00:08:03.716 "assigned_rate_limits": { 00:08:03.716 "rw_ios_per_sec": 0, 00:08:03.716 "rw_mbytes_per_sec": 0, 00:08:03.716 "r_mbytes_per_sec": 0, 00:08:03.716 "w_mbytes_per_sec": 0 00:08:03.716 }, 00:08:03.716 "claimed": true, 00:08:03.716 "claim_type": "exclusive_write", 00:08:03.716 "zoned": false, 00:08:03.716 "supported_io_types": { 00:08:03.716 "read": true, 00:08:03.716 "write": true, 00:08:03.716 "unmap": true, 00:08:03.716 "flush": true, 00:08:03.716 "reset": true, 00:08:03.716 "nvme_admin": false, 00:08:03.716 "nvme_io": false, 00:08:03.716 "nvme_io_md": false, 00:08:03.716 "write_zeroes": true, 00:08:03.716 "zcopy": true, 00:08:03.716 "get_zone_info": false, 00:08:03.716 "zone_management": false, 00:08:03.716 "zone_append": false, 00:08:03.716 "compare": false, 00:08:03.716 "compare_and_write": false, 00:08:03.716 "abort": true, 00:08:03.716 "seek_hole": false, 00:08:03.716 "seek_data": false, 00:08:03.716 "copy": true, 00:08:03.716 "nvme_iov_md": false 00:08:03.716 }, 00:08:03.716 "memory_domains": [ 00:08:03.716 { 00:08:03.716 "dma_device_id": "system", 00:08:03.716 "dma_device_type": 1 00:08:03.716 }, 00:08:03.716 { 00:08:03.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.716 "dma_device_type": 2 00:08:03.716 } 00:08:03.716 ], 00:08:03.716 "driver_specific": {} 00:08:03.716 } 00:08:03.716 ] 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.716 15:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.716 15:15:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.716 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:03.716 "name": "Existed_Raid", 00:08:03.716 "uuid": "1341ae9f-f9ad-420e-837f-f2ba91c03a3e", 00:08:03.716 "strip_size_kb": 0, 00:08:03.716 "state": "online", 00:08:03.716 "raid_level": "raid1", 00:08:03.716 "superblock": true, 00:08:03.716 "num_base_bdevs": 2, 00:08:03.716 "num_base_bdevs_discovered": 2, 00:08:03.716 "num_base_bdevs_operational": 2, 00:08:03.716 "base_bdevs_list": [ 00:08:03.716 { 00:08:03.716 "name": "BaseBdev1", 00:08:03.716 "uuid": "1995bf09-0f0e-4b1f-8dd5-07b61401a516", 00:08:03.716 "is_configured": true, 00:08:03.716 "data_offset": 2048, 00:08:03.716 "data_size": 63488 00:08:03.716 }, 00:08:03.716 { 00:08:03.716 "name": "BaseBdev2", 00:08:03.716 "uuid": "c30e84e6-5e1d-4313-ad1c-9f2bff53804e", 00:08:03.716 "is_configured": true, 00:08:03.716 "data_offset": 2048, 00:08:03.716 "data_size": 63488 00:08:03.716 } 00:08:03.716 ] 00:08:03.716 }' 00:08:03.716 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.717 15:15:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.975 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:03.975 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:03.975 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:03.975 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:03.975 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:03.975 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:03.975 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:03.975 15:15:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.975 15:15:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.975 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:03.975 [2024-11-20 15:15:50.446786] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:04.234 15:15:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.234 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:04.234 "name": "Existed_Raid", 00:08:04.234 "aliases": [ 00:08:04.234 "1341ae9f-f9ad-420e-837f-f2ba91c03a3e" 00:08:04.234 ], 00:08:04.234 "product_name": "Raid Volume", 00:08:04.234 "block_size": 512, 00:08:04.234 "num_blocks": 63488, 00:08:04.234 "uuid": "1341ae9f-f9ad-420e-837f-f2ba91c03a3e", 00:08:04.234 "assigned_rate_limits": { 00:08:04.234 "rw_ios_per_sec": 0, 00:08:04.234 "rw_mbytes_per_sec": 0, 00:08:04.234 "r_mbytes_per_sec": 0, 00:08:04.234 "w_mbytes_per_sec": 0 00:08:04.234 }, 00:08:04.234 "claimed": false, 00:08:04.234 "zoned": false, 00:08:04.234 "supported_io_types": { 00:08:04.234 "read": true, 00:08:04.234 "write": true, 00:08:04.234 "unmap": false, 00:08:04.234 "flush": false, 00:08:04.234 "reset": true, 00:08:04.234 "nvme_admin": false, 00:08:04.234 "nvme_io": false, 00:08:04.234 "nvme_io_md": false, 00:08:04.234 "write_zeroes": true, 00:08:04.234 "zcopy": false, 00:08:04.234 "get_zone_info": false, 00:08:04.234 "zone_management": false, 00:08:04.234 "zone_append": false, 00:08:04.234 "compare": false, 00:08:04.234 "compare_and_write": false, 00:08:04.234 "abort": false, 00:08:04.234 "seek_hole": false, 00:08:04.234 "seek_data": false, 00:08:04.234 "copy": false, 00:08:04.234 "nvme_iov_md": false 00:08:04.234 }, 00:08:04.234 "memory_domains": [ 00:08:04.234 { 00:08:04.234 "dma_device_id": "system", 00:08:04.234 
"dma_device_type": 1 00:08:04.234 }, 00:08:04.234 { 00:08:04.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.234 "dma_device_type": 2 00:08:04.234 }, 00:08:04.234 { 00:08:04.234 "dma_device_id": "system", 00:08:04.234 "dma_device_type": 1 00:08:04.234 }, 00:08:04.234 { 00:08:04.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.234 "dma_device_type": 2 00:08:04.234 } 00:08:04.234 ], 00:08:04.234 "driver_specific": { 00:08:04.234 "raid": { 00:08:04.234 "uuid": "1341ae9f-f9ad-420e-837f-f2ba91c03a3e", 00:08:04.234 "strip_size_kb": 0, 00:08:04.234 "state": "online", 00:08:04.234 "raid_level": "raid1", 00:08:04.234 "superblock": true, 00:08:04.234 "num_base_bdevs": 2, 00:08:04.234 "num_base_bdevs_discovered": 2, 00:08:04.234 "num_base_bdevs_operational": 2, 00:08:04.234 "base_bdevs_list": [ 00:08:04.234 { 00:08:04.234 "name": "BaseBdev1", 00:08:04.234 "uuid": "1995bf09-0f0e-4b1f-8dd5-07b61401a516", 00:08:04.234 "is_configured": true, 00:08:04.234 "data_offset": 2048, 00:08:04.234 "data_size": 63488 00:08:04.234 }, 00:08:04.234 { 00:08:04.234 "name": "BaseBdev2", 00:08:04.234 "uuid": "c30e84e6-5e1d-4313-ad1c-9f2bff53804e", 00:08:04.234 "is_configured": true, 00:08:04.234 "data_offset": 2048, 00:08:04.234 "data_size": 63488 00:08:04.234 } 00:08:04.234 ] 00:08:04.234 } 00:08:04.234 } 00:08:04.234 }' 00:08:04.234 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:04.234 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:04.234 BaseBdev2' 00:08:04.234 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.234 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:04.234 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:08:04.234 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:04.234 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.234 15:15:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.234 15:15:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.234 15:15:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.234 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.234 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.234 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.234 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:04.234 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.234 15:15:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.234 15:15:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.234 15:15:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.234 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.234 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.234 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:04.234 15:15:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.234 15:15:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.234 [2024-11-20 15:15:50.666224] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:04.492 15:15:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.492 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:04.492 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:04.492 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:04.492 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:04.492 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:04.492 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:04.492 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.492 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.492 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:04.492 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:04.492 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:04.492 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.492 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.492 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:04.492 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.492 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.492 15:15:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.492 15:15:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.492 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.492 15:15:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.492 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.492 "name": "Existed_Raid", 00:08:04.492 "uuid": "1341ae9f-f9ad-420e-837f-f2ba91c03a3e", 00:08:04.492 "strip_size_kb": 0, 00:08:04.492 "state": "online", 00:08:04.492 "raid_level": "raid1", 00:08:04.492 "superblock": true, 00:08:04.492 "num_base_bdevs": 2, 00:08:04.492 "num_base_bdevs_discovered": 1, 00:08:04.492 "num_base_bdevs_operational": 1, 00:08:04.492 "base_bdevs_list": [ 00:08:04.492 { 00:08:04.492 "name": null, 00:08:04.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.492 "is_configured": false, 00:08:04.492 "data_offset": 0, 00:08:04.492 "data_size": 63488 00:08:04.492 }, 00:08:04.492 { 00:08:04.492 "name": "BaseBdev2", 00:08:04.492 "uuid": "c30e84e6-5e1d-4313-ad1c-9f2bff53804e", 00:08:04.492 "is_configured": true, 00:08:04.493 "data_offset": 2048, 00:08:04.493 "data_size": 63488 00:08:04.493 } 00:08:04.493 ] 00:08:04.493 }' 00:08:04.493 15:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.493 15:15:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.750 15:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:08:04.750 15:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:04.750 15:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.750 15:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.750 15:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.750 15:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:04.750 15:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.750 15:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:04.750 15:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:04.750 15:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:04.750 15:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.750 15:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.750 [2024-11-20 15:15:51.203867] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:04.750 [2024-11-20 15:15:51.204010] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:05.009 [2024-11-20 15:15:51.306524] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:05.009 [2024-11-20 15:15:51.306628] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:05.009 [2024-11-20 15:15:51.306679] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:05.009 15:15:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.009 15:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:05.009 15:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:05.009 15:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.009 15:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:05.009 15:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.009 15:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.009 15:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.009 15:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:05.009 15:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:05.009 15:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:05.009 15:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62828 00:08:05.009 15:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62828 ']' 00:08:05.009 15:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62828 00:08:05.009 15:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:05.009 15:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:05.009 15:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62828 00:08:05.009 15:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:05.009 15:15:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:05.009 15:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62828' 00:08:05.009 killing process with pid 62828 00:08:05.009 15:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62828 00:08:05.009 15:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62828 00:08:05.009 [2024-11-20 15:15:51.386121] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:05.009 [2024-11-20 15:15:51.404053] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:06.384 15:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:06.384 00:08:06.384 real 0m5.046s 00:08:06.384 user 0m7.228s 00:08:06.384 sys 0m0.839s 00:08:06.384 15:15:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.384 15:15:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.384 ************************************ 00:08:06.384 END TEST raid_state_function_test_sb 00:08:06.384 ************************************ 00:08:06.384 15:15:52 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:06.384 15:15:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:06.384 15:15:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.384 15:15:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:06.384 ************************************ 00:08:06.384 START TEST raid_superblock_test 00:08:06.384 ************************************ 00:08:06.384 15:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:08:06.384 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local 
raid_level=raid1 00:08:06.384 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:06.384 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:06.384 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:06.384 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:06.384 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:06.384 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:06.384 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:06.384 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:06.384 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:06.384 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:06.384 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:06.384 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:06.384 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:06.384 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:06.384 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63080 00:08:06.384 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63080 00:08:06.384 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:06.384 15:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63080 ']' 00:08:06.384 15:15:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.384 15:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.384 15:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.384 15:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.384 15:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.384 [2024-11-20 15:15:52.797787] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:08:06.384 [2024-11-20 15:15:52.798028] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63080 ] 00:08:06.643 [2024-11-20 15:15:52.997740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.902 [2024-11-20 15:15:53.149082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.902 [2024-11-20 15:15:53.380064] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.902 [2024-11-20 15:15:53.380122] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:07.470 15:15:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.470 malloc1 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.470 [2024-11-20 15:15:53.700335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:07.470 [2024-11-20 15:15:53.700409] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.470 [2024-11-20 15:15:53.700436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:07.470 [2024-11-20 15:15:53.700449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.470 
[2024-11-20 15:15:53.702891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.470 [2024-11-20 15:15:53.702929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:07.470 pt1 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.470 malloc2 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.470 15:15:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.470 [2024-11-20 15:15:53.756742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:07.470 [2024-11-20 15:15:53.756813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.470 [2024-11-20 15:15:53.756844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:07.470 [2024-11-20 15:15:53.756856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.470 [2024-11-20 15:15:53.759390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.470 [2024-11-20 15:15:53.759429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:07.470 pt2 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.470 [2024-11-20 15:15:53.768789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:07.470 [2024-11-20 15:15:53.770927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:07.470 [2024-11-20 15:15:53.771100] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:07.470 [2024-11-20 15:15:53.771120] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:07.470 [2024-11-20 
15:15:53.771442] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:07.470 [2024-11-20 15:15:53.771601] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:07.470 [2024-11-20 15:15:53.771620] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:07.470 [2024-11-20 15:15:53.771810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.470 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:07.471 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:07.471 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.471 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.471 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.471 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.471 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.471 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.471 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:07.471 15:15:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.471 15:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.471 15:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.471 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.471 "name": "raid_bdev1", 00:08:07.471 "uuid": "d766bc63-e7a9-4760-9dd9-1d94b41fccff", 00:08:07.471 "strip_size_kb": 0, 00:08:07.471 "state": "online", 00:08:07.471 "raid_level": "raid1", 00:08:07.471 "superblock": true, 00:08:07.471 "num_base_bdevs": 2, 00:08:07.471 "num_base_bdevs_discovered": 2, 00:08:07.471 "num_base_bdevs_operational": 2, 00:08:07.471 "base_bdevs_list": [ 00:08:07.471 { 00:08:07.471 "name": "pt1", 00:08:07.471 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:07.471 "is_configured": true, 00:08:07.471 "data_offset": 2048, 00:08:07.471 "data_size": 63488 00:08:07.471 }, 00:08:07.471 { 00:08:07.471 "name": "pt2", 00:08:07.471 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:07.471 "is_configured": true, 00:08:07.471 "data_offset": 2048, 00:08:07.471 "data_size": 63488 00:08:07.471 } 00:08:07.471 ] 00:08:07.471 }' 00:08:07.471 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.471 15:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.752 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:07.752 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:07.752 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:07.752 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:07.752 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:07.752 
15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:07.752 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:07.752 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.752 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:07.752 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.752 [2024-11-20 15:15:54.200402] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:07.752 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.011 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:08.011 "name": "raid_bdev1", 00:08:08.011 "aliases": [ 00:08:08.011 "d766bc63-e7a9-4760-9dd9-1d94b41fccff" 00:08:08.011 ], 00:08:08.011 "product_name": "Raid Volume", 00:08:08.011 "block_size": 512, 00:08:08.011 "num_blocks": 63488, 00:08:08.011 "uuid": "d766bc63-e7a9-4760-9dd9-1d94b41fccff", 00:08:08.011 "assigned_rate_limits": { 00:08:08.011 "rw_ios_per_sec": 0, 00:08:08.011 "rw_mbytes_per_sec": 0, 00:08:08.011 "r_mbytes_per_sec": 0, 00:08:08.011 "w_mbytes_per_sec": 0 00:08:08.011 }, 00:08:08.012 "claimed": false, 00:08:08.012 "zoned": false, 00:08:08.012 "supported_io_types": { 00:08:08.012 "read": true, 00:08:08.012 "write": true, 00:08:08.012 "unmap": false, 00:08:08.012 "flush": false, 00:08:08.012 "reset": true, 00:08:08.012 "nvme_admin": false, 00:08:08.012 "nvme_io": false, 00:08:08.012 "nvme_io_md": false, 00:08:08.012 "write_zeroes": true, 00:08:08.012 "zcopy": false, 00:08:08.012 "get_zone_info": false, 00:08:08.012 "zone_management": false, 00:08:08.012 "zone_append": false, 00:08:08.012 "compare": false, 00:08:08.012 "compare_and_write": false, 00:08:08.012 "abort": false, 00:08:08.012 "seek_hole": false, 
00:08:08.012 "seek_data": false, 00:08:08.012 "copy": false, 00:08:08.012 "nvme_iov_md": false 00:08:08.012 }, 00:08:08.012 "memory_domains": [ 00:08:08.012 { 00:08:08.012 "dma_device_id": "system", 00:08:08.012 "dma_device_type": 1 00:08:08.012 }, 00:08:08.012 { 00:08:08.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.012 "dma_device_type": 2 00:08:08.012 }, 00:08:08.012 { 00:08:08.012 "dma_device_id": "system", 00:08:08.012 "dma_device_type": 1 00:08:08.012 }, 00:08:08.012 { 00:08:08.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.012 "dma_device_type": 2 00:08:08.012 } 00:08:08.012 ], 00:08:08.012 "driver_specific": { 00:08:08.012 "raid": { 00:08:08.012 "uuid": "d766bc63-e7a9-4760-9dd9-1d94b41fccff", 00:08:08.012 "strip_size_kb": 0, 00:08:08.012 "state": "online", 00:08:08.012 "raid_level": "raid1", 00:08:08.012 "superblock": true, 00:08:08.012 "num_base_bdevs": 2, 00:08:08.012 "num_base_bdevs_discovered": 2, 00:08:08.012 "num_base_bdevs_operational": 2, 00:08:08.012 "base_bdevs_list": [ 00:08:08.012 { 00:08:08.012 "name": "pt1", 00:08:08.012 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:08.012 "is_configured": true, 00:08:08.012 "data_offset": 2048, 00:08:08.012 "data_size": 63488 00:08:08.012 }, 00:08:08.012 { 00:08:08.012 "name": "pt2", 00:08:08.012 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:08.012 "is_configured": true, 00:08:08.012 "data_offset": 2048, 00:08:08.012 "data_size": 63488 00:08:08.012 } 00:08:08.012 ] 00:08:08.012 } 00:08:08.012 } 00:08:08.012 }' 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:08.012 pt2' 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.012 15:15:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.012 [2024-11-20 15:15:54.412098] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d766bc63-e7a9-4760-9dd9-1d94b41fccff 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d766bc63-e7a9-4760-9dd9-1d94b41fccff ']' 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.012 [2024-11-20 15:15:54.455790] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:08.012 [2024-11-20 15:15:54.455828] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:08.012 [2024-11-20 15:15:54.455926] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:08.012 [2024-11-20 15:15:54.455988] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:08.012 [2024-11-20 15:15:54.456003] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.012 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.273 [2024-11-20 15:15:54.583636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:08.273 [2024-11-20 15:15:54.585878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:08.273 [2024-11-20 15:15:54.585947] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a 
different raid bdev found on bdev malloc1 00:08:08.273 [2024-11-20 15:15:54.586003] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:08.273 [2024-11-20 15:15:54.586022] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:08.273 [2024-11-20 15:15:54.586036] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:08.273 request: 00:08:08.273 { 00:08:08.273 "name": "raid_bdev1", 00:08:08.273 "raid_level": "raid1", 00:08:08.273 "base_bdevs": [ 00:08:08.273 "malloc1", 00:08:08.273 "malloc2" 00:08:08.273 ], 00:08:08.273 "superblock": false, 00:08:08.273 "method": "bdev_raid_create", 00:08:08.273 "req_id": 1 00:08:08.273 } 00:08:08.273 Got JSON-RPC error response 00:08:08.273 response: 00:08:08.273 { 00:08:08.273 "code": -17, 00:08:08.273 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:08.273 } 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.273 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.273 [2024-11-20 15:15:54.647535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:08.273 [2024-11-20 15:15:54.647612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:08.273 [2024-11-20 15:15:54.647636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:08.273 [2024-11-20 15:15:54.647651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:08.273 [2024-11-20 15:15:54.650255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:08.273 [2024-11-20 15:15:54.650296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:08.273 [2024-11-20 15:15:54.650386] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:08.273 [2024-11-20 15:15:54.650447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:08.274 pt1 00:08:08.274 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.274 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:08.274 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:08.274 15:15:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.274 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:08.274 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:08.274 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.274 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.274 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.274 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.274 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.274 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:08.274 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.274 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.274 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.274 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.274 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.274 "name": "raid_bdev1", 00:08:08.274 "uuid": "d766bc63-e7a9-4760-9dd9-1d94b41fccff", 00:08:08.274 "strip_size_kb": 0, 00:08:08.274 "state": "configuring", 00:08:08.274 "raid_level": "raid1", 00:08:08.274 "superblock": true, 00:08:08.274 "num_base_bdevs": 2, 00:08:08.274 "num_base_bdevs_discovered": 1, 00:08:08.274 "num_base_bdevs_operational": 2, 00:08:08.274 "base_bdevs_list": [ 00:08:08.274 { 00:08:08.274 "name": "pt1", 00:08:08.274 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:08.274 
"is_configured": true, 00:08:08.274 "data_offset": 2048, 00:08:08.274 "data_size": 63488 00:08:08.274 }, 00:08:08.274 { 00:08:08.274 "name": null, 00:08:08.274 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:08.274 "is_configured": false, 00:08:08.274 "data_offset": 2048, 00:08:08.274 "data_size": 63488 00:08:08.274 } 00:08:08.274 ] 00:08:08.274 }' 00:08:08.274 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.274 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.842 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:08.842 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:08.842 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:08.842 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:08.842 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.842 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.842 [2024-11-20 15:15:55.075478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:08.842 [2024-11-20 15:15:55.075573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:08.842 [2024-11-20 15:15:55.075600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:08.842 [2024-11-20 15:15:55.075616] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:08.842 [2024-11-20 15:15:55.076143] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:08.842 [2024-11-20 15:15:55.076172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:08.842 [2024-11-20 15:15:55.076264] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:08.842 [2024-11-20 15:15:55.076297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:08.842 [2024-11-20 15:15:55.076428] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:08.842 [2024-11-20 15:15:55.076442] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:08.842 [2024-11-20 15:15:55.076727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:08.842 [2024-11-20 15:15:55.076876] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:08.842 [2024-11-20 15:15:55.076887] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:08.842 [2024-11-20 15:15:55.077029] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:08.842 pt2 00:08:08.843 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.843 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:08.843 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:08.843 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:08.843 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:08.843 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:08.843 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:08.843 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:08.843 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.843 
15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.843 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.843 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.843 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.843 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.843 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:08.843 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.843 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.843 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.843 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.843 "name": "raid_bdev1", 00:08:08.843 "uuid": "d766bc63-e7a9-4760-9dd9-1d94b41fccff", 00:08:08.843 "strip_size_kb": 0, 00:08:08.843 "state": "online", 00:08:08.843 "raid_level": "raid1", 00:08:08.843 "superblock": true, 00:08:08.843 "num_base_bdevs": 2, 00:08:08.843 "num_base_bdevs_discovered": 2, 00:08:08.843 "num_base_bdevs_operational": 2, 00:08:08.843 "base_bdevs_list": [ 00:08:08.843 { 00:08:08.843 "name": "pt1", 00:08:08.843 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:08.843 "is_configured": true, 00:08:08.843 "data_offset": 2048, 00:08:08.843 "data_size": 63488 00:08:08.843 }, 00:08:08.843 { 00:08:08.843 "name": "pt2", 00:08:08.843 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:08.843 "is_configured": true, 00:08:08.843 "data_offset": 2048, 00:08:08.843 "data_size": 63488 00:08:08.843 } 00:08:08.843 ] 00:08:08.843 }' 00:08:08.843 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:08:08.843 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.102 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:09.102 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:09.102 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:09.102 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:09.102 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:09.102 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:09.102 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:09.102 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:09.102 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.102 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.102 [2024-11-20 15:15:55.491699] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.102 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.102 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:09.102 "name": "raid_bdev1", 00:08:09.102 "aliases": [ 00:08:09.102 "d766bc63-e7a9-4760-9dd9-1d94b41fccff" 00:08:09.102 ], 00:08:09.102 "product_name": "Raid Volume", 00:08:09.102 "block_size": 512, 00:08:09.102 "num_blocks": 63488, 00:08:09.102 "uuid": "d766bc63-e7a9-4760-9dd9-1d94b41fccff", 00:08:09.102 "assigned_rate_limits": { 00:08:09.102 "rw_ios_per_sec": 0, 00:08:09.102 "rw_mbytes_per_sec": 0, 00:08:09.102 "r_mbytes_per_sec": 0, 00:08:09.102 "w_mbytes_per_sec": 0 
00:08:09.102 }, 00:08:09.102 "claimed": false, 00:08:09.102 "zoned": false, 00:08:09.102 "supported_io_types": { 00:08:09.102 "read": true, 00:08:09.102 "write": true, 00:08:09.102 "unmap": false, 00:08:09.102 "flush": false, 00:08:09.102 "reset": true, 00:08:09.102 "nvme_admin": false, 00:08:09.102 "nvme_io": false, 00:08:09.102 "nvme_io_md": false, 00:08:09.102 "write_zeroes": true, 00:08:09.102 "zcopy": false, 00:08:09.102 "get_zone_info": false, 00:08:09.102 "zone_management": false, 00:08:09.102 "zone_append": false, 00:08:09.102 "compare": false, 00:08:09.102 "compare_and_write": false, 00:08:09.102 "abort": false, 00:08:09.102 "seek_hole": false, 00:08:09.102 "seek_data": false, 00:08:09.102 "copy": false, 00:08:09.102 "nvme_iov_md": false 00:08:09.102 }, 00:08:09.102 "memory_domains": [ 00:08:09.102 { 00:08:09.102 "dma_device_id": "system", 00:08:09.102 "dma_device_type": 1 00:08:09.102 }, 00:08:09.102 { 00:08:09.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.102 "dma_device_type": 2 00:08:09.102 }, 00:08:09.102 { 00:08:09.102 "dma_device_id": "system", 00:08:09.102 "dma_device_type": 1 00:08:09.102 }, 00:08:09.102 { 00:08:09.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.102 "dma_device_type": 2 00:08:09.102 } 00:08:09.102 ], 00:08:09.102 "driver_specific": { 00:08:09.102 "raid": { 00:08:09.102 "uuid": "d766bc63-e7a9-4760-9dd9-1d94b41fccff", 00:08:09.102 "strip_size_kb": 0, 00:08:09.102 "state": "online", 00:08:09.102 "raid_level": "raid1", 00:08:09.102 "superblock": true, 00:08:09.102 "num_base_bdevs": 2, 00:08:09.102 "num_base_bdevs_discovered": 2, 00:08:09.102 "num_base_bdevs_operational": 2, 00:08:09.102 "base_bdevs_list": [ 00:08:09.102 { 00:08:09.102 "name": "pt1", 00:08:09.102 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:09.102 "is_configured": true, 00:08:09.102 "data_offset": 2048, 00:08:09.102 "data_size": 63488 00:08:09.102 }, 00:08:09.102 { 00:08:09.102 "name": "pt2", 00:08:09.102 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:08:09.102 "is_configured": true, 00:08:09.102 "data_offset": 2048, 00:08:09.102 "data_size": 63488 00:08:09.102 } 00:08:09.102 ] 00:08:09.102 } 00:08:09.102 } 00:08:09.102 }' 00:08:09.102 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:09.102 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:09.102 pt2' 00:08:09.102 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.362 [2024-11-20 15:15:55.723683] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d766bc63-e7a9-4760-9dd9-1d94b41fccff '!=' d766bc63-e7a9-4760-9dd9-1d94b41fccff ']' 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:09.362 [2024-11-20 15:15:55.771494] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:08:09.362 "name": "raid_bdev1", 00:08:09.362 "uuid": "d766bc63-e7a9-4760-9dd9-1d94b41fccff", 00:08:09.362 "strip_size_kb": 0, 00:08:09.362 "state": "online", 00:08:09.362 "raid_level": "raid1", 00:08:09.362 "superblock": true, 00:08:09.362 "num_base_bdevs": 2, 00:08:09.362 "num_base_bdevs_discovered": 1, 00:08:09.362 "num_base_bdevs_operational": 1, 00:08:09.362 "base_bdevs_list": [ 00:08:09.362 { 00:08:09.362 "name": null, 00:08:09.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.362 "is_configured": false, 00:08:09.362 "data_offset": 0, 00:08:09.362 "data_size": 63488 00:08:09.362 }, 00:08:09.362 { 00:08:09.362 "name": "pt2", 00:08:09.362 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:09.362 "is_configured": true, 00:08:09.362 "data_offset": 2048, 00:08:09.362 "data_size": 63488 00:08:09.362 } 00:08:09.362 ] 00:08:09.362 }' 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.362 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.931 [2024-11-20 15:15:56.199433] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:09.931 [2024-11-20 15:15:56.199477] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:09.931 [2024-11-20 15:15:56.199563] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:09.931 [2024-11-20 15:15:56.199613] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:09.931 [2024-11-20 15:15:56.199627] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.931 [2024-11-20 15:15:56.271432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:09.931 [2024-11-20 15:15:56.271507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:09.931 [2024-11-20 15:15:56.271528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:09.931 [2024-11-20 15:15:56.271543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:09.931 [2024-11-20 15:15:56.274182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:09.931 [2024-11-20 15:15:56.274227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:09.931 [2024-11-20 15:15:56.274323] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:09.931 [2024-11-20 15:15:56.274379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:09.931 [2024-11-20 15:15:56.274478] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:09.931 [2024-11-20 15:15:56.274495] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:09.931 [2024-11-20 15:15:56.274769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:09.931 [2024-11-20 15:15:56.274922] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:09.931 [2024-11-20 15:15:56.274933] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:08:09.931 [2024-11-20 15:15:56.275079] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.931 pt2 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.931 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:08:09.931 "name": "raid_bdev1", 00:08:09.932 "uuid": "d766bc63-e7a9-4760-9dd9-1d94b41fccff", 00:08:09.932 "strip_size_kb": 0, 00:08:09.932 "state": "online", 00:08:09.932 "raid_level": "raid1", 00:08:09.932 "superblock": true, 00:08:09.932 "num_base_bdevs": 2, 00:08:09.932 "num_base_bdevs_discovered": 1, 00:08:09.932 "num_base_bdevs_operational": 1, 00:08:09.932 "base_bdevs_list": [ 00:08:09.932 { 00:08:09.932 "name": null, 00:08:09.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.932 "is_configured": false, 00:08:09.932 "data_offset": 2048, 00:08:09.932 "data_size": 63488 00:08:09.932 }, 00:08:09.932 { 00:08:09.932 "name": "pt2", 00:08:09.932 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:09.932 "is_configured": true, 00:08:09.932 "data_offset": 2048, 00:08:09.932 "data_size": 63488 00:08:09.932 } 00:08:09.932 ] 00:08:09.932 }' 00:08:09.932 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.932 15:15:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.501 [2024-11-20 15:15:56.723387] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:10.501 [2024-11-20 15:15:56.723422] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:10.501 [2024-11-20 15:15:56.723501] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:10.501 [2024-11-20 15:15:56.723555] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:10.501 [2024-11-20 15:15:56.723567] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.501 [2024-11-20 15:15:56.787426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:10.501 [2024-11-20 15:15:56.787493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:10.501 [2024-11-20 15:15:56.787518] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:10.501 [2024-11-20 15:15:56.787531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:10.501 [2024-11-20 15:15:56.790181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:10.501 [2024-11-20 15:15:56.790219] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:10.501 [2024-11-20 15:15:56.790320] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:10.501 [2024-11-20 15:15:56.790365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:10.501 [2024-11-20 15:15:56.790506] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:10.501 [2024-11-20 15:15:56.790525] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:10.501 [2024-11-20 15:15:56.790544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:10.501 [2024-11-20 15:15:56.790605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:10.501 [2024-11-20 15:15:56.790698] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:10.501 [2024-11-20 15:15:56.790709] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:10.501 [2024-11-20 15:15:56.790995] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:10.501 pt1 00:08:10.501 [2024-11-20 15:15:56.791145] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:10.501 [2024-11-20 15:15:56.791165] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:10.501 [2024-11-20 15:15:56.791376] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.501 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.501 "name": "raid_bdev1", 00:08:10.501 "uuid": "d766bc63-e7a9-4760-9dd9-1d94b41fccff", 00:08:10.501 "strip_size_kb": 0, 00:08:10.501 "state": "online", 00:08:10.501 "raid_level": "raid1", 00:08:10.501 "superblock": true, 00:08:10.501 "num_base_bdevs": 2, 00:08:10.501 "num_base_bdevs_discovered": 1, 00:08:10.501 "num_base_bdevs_operational": 
1, 00:08:10.501 "base_bdevs_list": [ 00:08:10.501 { 00:08:10.502 "name": null, 00:08:10.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.502 "is_configured": false, 00:08:10.502 "data_offset": 2048, 00:08:10.502 "data_size": 63488 00:08:10.502 }, 00:08:10.502 { 00:08:10.502 "name": "pt2", 00:08:10.502 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:10.502 "is_configured": true, 00:08:10.502 "data_offset": 2048, 00:08:10.502 "data_size": 63488 00:08:10.502 } 00:08:10.502 ] 00:08:10.502 }' 00:08:10.502 15:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.502 15:15:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.761 15:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:10.761 15:15:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.761 15:15:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.761 15:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:10.761 15:15:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.761 15:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:10.761 15:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:10.761 15:15:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.761 15:15:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.761 15:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:10.761 [2024-11-20 15:15:57.235635] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:11.020 15:15:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.020 15:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' d766bc63-e7a9-4760-9dd9-1d94b41fccff '!=' d766bc63-e7a9-4760-9dd9-1d94b41fccff ']' 00:08:11.020 15:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63080 00:08:11.020 15:15:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63080 ']' 00:08:11.020 15:15:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63080 00:08:11.020 15:15:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:11.020 15:15:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.020 15:15:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63080 00:08:11.020 15:15:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:11.020 killing process with pid 63080 00:08:11.020 15:15:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:11.020 15:15:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63080' 00:08:11.020 15:15:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63080 00:08:11.020 [2024-11-20 15:15:57.312357] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:11.020 15:15:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63080 00:08:11.020 [2024-11-20 15:15:57.312460] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:11.020 [2024-11-20 15:15:57.312511] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:11.020 [2024-11-20 15:15:57.312532] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state 
offline 00:08:11.279 [2024-11-20 15:15:57.523195] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:12.220 15:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:12.220 00:08:12.220 real 0m6.032s 00:08:12.220 user 0m9.060s 00:08:12.220 sys 0m1.130s 00:08:12.220 15:15:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.220 15:15:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.220 ************************************ 00:08:12.220 END TEST raid_superblock_test 00:08:12.220 ************************************ 00:08:12.479 15:15:58 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:12.479 15:15:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:12.479 15:15:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.479 15:15:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:12.479 ************************************ 00:08:12.479 START TEST raid_read_error_test 00:08:12.479 ************************************ 00:08:12.479 15:15:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:08:12.479 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:12.479 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:12.479 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:12.479 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:12.479 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:12.479 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:12.479 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:08:12.479 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:12.479 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:12.479 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:12.479 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:12.479 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:12.479 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:12.479 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:12.479 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:12.479 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:12.479 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:12.479 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:12.479 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:12.479 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:12.479 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:12.479 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.DJjhihvfd7 00:08:12.479 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63405 00:08:12.479 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:12.479 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63405 00:08:12.479 
15:15:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63405 ']' 00:08:12.479 15:15:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.479 15:15:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.479 15:15:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.479 15:15:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.479 15:15:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.479 [2024-11-20 15:15:58.881896] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:08:12.479 [2024-11-20 15:15:58.882064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63405 ] 00:08:12.737 [2024-11-20 15:15:59.083616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.995 [2024-11-20 15:15:59.232385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.995 [2024-11-20 15:15:59.450771] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.995 [2024-11-20 15:15:59.450832] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.253 15:15:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.253 15:15:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:13.253 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:08:13.253 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:13.253 15:15:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.253 15:15:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.512 BaseBdev1_malloc 00:08:13.512 15:15:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.512 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:13.512 15:15:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.512 15:15:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.512 true 00:08:13.512 15:15:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.512 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:13.512 15:15:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.512 15:15:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.512 [2024-11-20 15:15:59.788041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:13.512 [2024-11-20 15:15:59.788125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.512 [2024-11-20 15:15:59.788151] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:13.512 [2024-11-20 15:15:59.788167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.512 [2024-11-20 15:15:59.790721] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.512 [2024-11-20 15:15:59.790759] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:08:13.512 BaseBdev1 00:08:13.512 15:15:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.512 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:13.512 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:13.512 15:15:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.512 15:15:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.512 BaseBdev2_malloc 00:08:13.512 15:15:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.512 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:13.512 15:15:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.512 15:15:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.512 true 00:08:13.512 15:15:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.512 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:13.512 15:15:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.512 15:15:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.512 [2024-11-20 15:15:59.855763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:13.512 [2024-11-20 15:15:59.855840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.512 [2024-11-20 15:15:59.855863] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:13.512 [2024-11-20 15:15:59.855878] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.512 [2024-11-20 15:15:59.858482] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.512 [2024-11-20 15:15:59.858526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:13.512 BaseBdev2 00:08:13.512 15:15:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.512 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:13.512 15:15:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.512 15:15:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.512 [2024-11-20 15:15:59.867821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:13.512 [2024-11-20 15:15:59.870103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:13.512 [2024-11-20 15:15:59.870338] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:13.512 [2024-11-20 15:15:59.870356] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:13.512 [2024-11-20 15:15:59.870684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:13.512 [2024-11-20 15:15:59.870889] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:13.512 [2024-11-20 15:15:59.870913] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:13.512 [2024-11-20 15:15:59.871102] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.512 15:15:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.512 15:15:59 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:13.512 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:13.512 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.512 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:13.512 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:13.512 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:13.513 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.513 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.513 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.513 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.513 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:13.513 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.513 15:15:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.513 15:15:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.513 15:15:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.513 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.513 "name": "raid_bdev1", 00:08:13.513 "uuid": "32d48b77-37c7-4247-a0e7-8fdd93905214", 00:08:13.513 "strip_size_kb": 0, 00:08:13.513 "state": "online", 00:08:13.513 "raid_level": "raid1", 00:08:13.513 "superblock": true, 00:08:13.513 "num_base_bdevs": 2, 00:08:13.513 
"num_base_bdevs_discovered": 2, 00:08:13.513 "num_base_bdevs_operational": 2, 00:08:13.513 "base_bdevs_list": [ 00:08:13.513 { 00:08:13.513 "name": "BaseBdev1", 00:08:13.513 "uuid": "a91e9b0c-5546-5de9-b58d-b8a2b519af93", 00:08:13.513 "is_configured": true, 00:08:13.513 "data_offset": 2048, 00:08:13.513 "data_size": 63488 00:08:13.513 }, 00:08:13.513 { 00:08:13.513 "name": "BaseBdev2", 00:08:13.513 "uuid": "87cbf9bc-a37d-5c11-9ed0-bc601c49cccd", 00:08:13.513 "is_configured": true, 00:08:13.513 "data_offset": 2048, 00:08:13.513 "data_size": 63488 00:08:13.513 } 00:08:13.513 ] 00:08:13.513 }' 00:08:13.513 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.513 15:15:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.080 15:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:14.080 15:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:14.080 [2024-11-20 15:16:00.428810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:15.017 15:16:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:15.017 15:16:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.017 15:16:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.017 15:16:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.017 15:16:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:15.017 15:16:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:15.017 15:16:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:15.017 15:16:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:15.017 15:16:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:15.017 15:16:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:15.017 15:16:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:15.017 15:16:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:15.017 15:16:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:15.017 15:16:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:15.017 15:16:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.017 15:16:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.017 15:16:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.017 15:16:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.017 15:16:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.017 15:16:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:15.017 15:16:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.017 15:16:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.017 15:16:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.017 15:16:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.017 "name": "raid_bdev1", 00:08:15.017 "uuid": "32d48b77-37c7-4247-a0e7-8fdd93905214", 00:08:15.017 "strip_size_kb": 0, 00:08:15.017 "state": "online", 
00:08:15.017 "raid_level": "raid1", 00:08:15.017 "superblock": true, 00:08:15.017 "num_base_bdevs": 2, 00:08:15.017 "num_base_bdevs_discovered": 2, 00:08:15.017 "num_base_bdevs_operational": 2, 00:08:15.017 "base_bdevs_list": [ 00:08:15.017 { 00:08:15.017 "name": "BaseBdev1", 00:08:15.017 "uuid": "a91e9b0c-5546-5de9-b58d-b8a2b519af93", 00:08:15.017 "is_configured": true, 00:08:15.017 "data_offset": 2048, 00:08:15.017 "data_size": 63488 00:08:15.017 }, 00:08:15.017 { 00:08:15.017 "name": "BaseBdev2", 00:08:15.017 "uuid": "87cbf9bc-a37d-5c11-9ed0-bc601c49cccd", 00:08:15.017 "is_configured": true, 00:08:15.017 "data_offset": 2048, 00:08:15.017 "data_size": 63488 00:08:15.017 } 00:08:15.017 ] 00:08:15.017 }' 00:08:15.017 15:16:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.017 15:16:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.585 15:16:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:15.585 15:16:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.585 15:16:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.586 [2024-11-20 15:16:01.765499] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:15.586 [2024-11-20 15:16:01.765538] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:15.586 [2024-11-20 15:16:01.768453] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:15.586 [2024-11-20 15:16:01.768632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:15.586 [2024-11-20 15:16:01.768788] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:15.586 [2024-11-20 15:16:01.768974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:08:15.586 { 00:08:15.586 "results": [ 00:08:15.586 { 00:08:15.586 "job": "raid_bdev1", 00:08:15.586 "core_mask": "0x1", 00:08:15.586 "workload": "randrw", 00:08:15.586 "percentage": 50, 00:08:15.586 "status": "finished", 00:08:15.586 "queue_depth": 1, 00:08:15.586 "io_size": 131072, 00:08:15.586 "runtime": 1.336593, 00:08:15.586 "iops": 17714.442616413522, 00:08:15.586 "mibps": 2214.3053270516903, 00:08:15.586 "io_failed": 0, 00:08:15.586 "io_timeout": 0, 00:08:15.586 "avg_latency_us": 53.725666767250615, 00:08:15.586 "min_latency_us": 24.160642570281123, 00:08:15.586 "max_latency_us": 1552.8610441767069 00:08:15.586 } 00:08:15.586 ], 00:08:15.586 "core_count": 1 00:08:15.586 } 00:08:15.586 15:16:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.586 15:16:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63405 00:08:15.586 15:16:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63405 ']' 00:08:15.586 15:16:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63405 00:08:15.586 15:16:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:15.586 15:16:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:15.586 15:16:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63405 00:08:15.586 killing process with pid 63405 00:08:15.586 15:16:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:15.586 15:16:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:15.586 15:16:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63405' 00:08:15.586 15:16:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63405 00:08:15.586 [2024-11-20 
15:16:01.823332] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:15.586 15:16:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63405 00:08:15.586 [2024-11-20 15:16:01.962170] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:16.996 15:16:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:16.996 15:16:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.DJjhihvfd7 00:08:16.996 15:16:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:16.996 ************************************ 00:08:16.996 END TEST raid_read_error_test 00:08:16.996 ************************************ 00:08:16.996 15:16:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:16.996 15:16:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:16.996 15:16:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:16.996 15:16:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:16.996 15:16:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:16.996 00:08:16.996 real 0m4.415s 00:08:16.996 user 0m5.239s 00:08:16.996 sys 0m0.640s 00:08:16.996 15:16:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.996 15:16:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.996 15:16:03 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:16.996 15:16:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:16.996 15:16:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.996 15:16:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:16.996 ************************************ 00:08:16.996 START TEST 
raid_write_error_test 00:08:16.996 ************************************ 00:08:16.996 15:16:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:16.996 15:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:16.996 15:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:16.997 15:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:16.997 15:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:16.997 15:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.997 15:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:16.997 15:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:16.997 15:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.997 15:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:16.997 15:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:16.997 15:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.997 15:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:16.997 15:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:16.997 15:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:16.997 15:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:16.997 15:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:16.997 15:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:16.997 15:16:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:16.997 15:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:16.997 15:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:16.997 15:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:16.997 15:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.SfFx2t2PPt 00:08:16.997 15:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63550 00:08:16.997 15:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63550 00:08:16.997 15:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:16.997 15:16:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63550 ']' 00:08:16.997 15:16:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.997 15:16:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.997 15:16:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.997 15:16:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.997 15:16:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.997 [2024-11-20 15:16:03.365035] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:08:16.997 [2024-11-20 15:16:03.365163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63550 ] 00:08:17.256 [2024-11-20 15:16:03.539881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.256 [2024-11-20 15:16:03.661851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.514 [2024-11-20 15:16:03.877671] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.514 [2024-11-20 15:16:03.877724] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.774 15:16:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.774 15:16:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:17.774 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:17.774 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:17.774 15:16:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.774 15:16:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.774 BaseBdev1_malloc 00:08:17.774 15:16:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.774 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:17.774 15:16:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.774 15:16:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.034 true 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.034 [2024-11-20 15:16:04.259747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:18.034 [2024-11-20 15:16:04.259956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.034 [2024-11-20 15:16:04.260022] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:18.034 [2024-11-20 15:16:04.260220] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.034 [2024-11-20 15:16:04.262830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.034 [2024-11-20 15:16:04.262995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:18.034 BaseBdev1 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.034 BaseBdev2_malloc 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:18.034 15:16:04 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.034 true 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.034 [2024-11-20 15:16:04.325528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:18.034 [2024-11-20 15:16:04.325598] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.034 [2024-11-20 15:16:04.325621] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:18.034 [2024-11-20 15:16:04.325635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.034 [2024-11-20 15:16:04.328061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.034 [2024-11-20 15:16:04.328106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:18.034 BaseBdev2 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.034 [2024-11-20 15:16:04.337576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:18.034 [2024-11-20 15:16:04.339847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:18.034 [2024-11-20 15:16:04.340056] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:18.034 [2024-11-20 15:16:04.340073] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:18.034 [2024-11-20 15:16:04.340352] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:18.034 [2024-11-20 15:16:04.340533] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:18.034 [2024-11-20 15:16:04.340544] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:18.034 [2024-11-20 15:16:04.340738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.034 15:16:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.035 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.035 "name": "raid_bdev1", 00:08:18.035 "uuid": "58662e91-3ec5-49a8-8f26-fe3068d92aed", 00:08:18.035 "strip_size_kb": 0, 00:08:18.035 "state": "online", 00:08:18.035 "raid_level": "raid1", 00:08:18.035 "superblock": true, 00:08:18.035 "num_base_bdevs": 2, 00:08:18.035 "num_base_bdevs_discovered": 2, 00:08:18.035 "num_base_bdevs_operational": 2, 00:08:18.035 "base_bdevs_list": [ 00:08:18.035 { 00:08:18.035 "name": "BaseBdev1", 00:08:18.035 "uuid": "40697b3e-8f2e-5754-89cf-0b2d3cb63711", 00:08:18.035 "is_configured": true, 00:08:18.035 "data_offset": 2048, 00:08:18.035 "data_size": 63488 00:08:18.035 }, 00:08:18.035 { 00:08:18.035 "name": "BaseBdev2", 00:08:18.035 "uuid": "9de83191-a8a8-58d0-a9c9-3df7e1f54989", 00:08:18.035 "is_configured": true, 00:08:18.035 "data_offset": 2048, 00:08:18.035 "data_size": 63488 00:08:18.035 } 00:08:18.035 ] 00:08:18.035 }' 00:08:18.035 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.035 15:16:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.602 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:18.602 15:16:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:18.602 [2024-11-20 15:16:04.890121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:19.538 15:16:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:19.538 15:16:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.538 15:16:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.538 [2024-11-20 15:16:05.798914] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:19.538 [2024-11-20 15:16:05.798993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:19.538 [2024-11-20 15:16:05.799192] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:08:19.538 15:16:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.538 15:16:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:19.538 15:16:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:19.538 15:16:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:19.538 15:16:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:19.538 15:16:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:19.538 15:16:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:19.538 15:16:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:19.538 15:16:05 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:19.538 15:16:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:19.538 15:16:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:19.538 15:16:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.538 15:16:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.538 15:16:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.538 15:16:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.538 15:16:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.538 15:16:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.538 15:16:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.538 15:16:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.538 15:16:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.538 15:16:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.538 "name": "raid_bdev1", 00:08:19.538 "uuid": "58662e91-3ec5-49a8-8f26-fe3068d92aed", 00:08:19.538 "strip_size_kb": 0, 00:08:19.538 "state": "online", 00:08:19.538 "raid_level": "raid1", 00:08:19.538 "superblock": true, 00:08:19.538 "num_base_bdevs": 2, 00:08:19.538 "num_base_bdevs_discovered": 1, 00:08:19.538 "num_base_bdevs_operational": 1, 00:08:19.538 "base_bdevs_list": [ 00:08:19.538 { 00:08:19.538 "name": null, 00:08:19.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.538 "is_configured": false, 00:08:19.538 "data_offset": 0, 00:08:19.538 "data_size": 63488 00:08:19.538 }, 00:08:19.538 { 00:08:19.538 "name": 
"BaseBdev2", 00:08:19.538 "uuid": "9de83191-a8a8-58d0-a9c9-3df7e1f54989", 00:08:19.538 "is_configured": true, 00:08:19.538 "data_offset": 2048, 00:08:19.538 "data_size": 63488 00:08:19.538 } 00:08:19.538 ] 00:08:19.538 }' 00:08:19.538 15:16:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.538 15:16:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.797 15:16:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:19.797 15:16:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.797 15:16:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.797 [2024-11-20 15:16:06.236059] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:19.797 [2024-11-20 15:16:06.236092] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:19.797 [2024-11-20 15:16:06.238725] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:19.797 [2024-11-20 15:16:06.238768] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:19.797 [2024-11-20 15:16:06.238830] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:19.797 [2024-11-20 15:16:06.238845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:19.797 { 00:08:19.797 "results": [ 00:08:19.797 { 00:08:19.797 "job": "raid_bdev1", 00:08:19.797 "core_mask": "0x1", 00:08:19.797 "workload": "randrw", 00:08:19.797 "percentage": 50, 00:08:19.797 "status": "finished", 00:08:19.797 "queue_depth": 1, 00:08:19.797 "io_size": 131072, 00:08:19.797 "runtime": 1.345862, 00:08:19.797 "iops": 20641.04640743256, 00:08:19.797 "mibps": 2580.13080092907, 00:08:19.797 "io_failed": 0, 00:08:19.797 "io_timeout": 0, 
00:08:19.798 "avg_latency_us": 45.688843552756744, 00:08:19.798 "min_latency_us": 23.440963855421685, 00:08:19.798 "max_latency_us": 1592.340562248996 00:08:19.798 } 00:08:19.798 ], 00:08:19.798 "core_count": 1 00:08:19.798 } 00:08:19.798 15:16:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.798 15:16:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63550 00:08:19.798 15:16:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63550 ']' 00:08:19.798 15:16:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63550 00:08:19.798 15:16:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:19.798 15:16:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:19.798 15:16:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63550 00:08:20.121 killing process with pid 63550 00:08:20.121 15:16:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:20.121 15:16:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:20.121 15:16:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63550' 00:08:20.121 15:16:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63550 00:08:20.121 [2024-11-20 15:16:06.290120] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:20.121 15:16:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63550 00:08:20.121 [2024-11-20 15:16:06.431995] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:21.534 15:16:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.SfFx2t2PPt 00:08:21.534 15:16:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:21.534 15:16:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:21.534 15:16:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:21.534 15:16:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:21.534 ************************************ 00:08:21.534 END TEST raid_write_error_test 00:08:21.534 ************************************ 00:08:21.534 15:16:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:21.534 15:16:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:21.534 15:16:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:21.534 00:08:21.534 real 0m4.428s 00:08:21.534 user 0m5.279s 00:08:21.534 sys 0m0.588s 00:08:21.534 15:16:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.534 15:16:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.534 15:16:07 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:21.534 15:16:07 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:21.534 15:16:07 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:21.534 15:16:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:21.534 15:16:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.534 15:16:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:21.534 ************************************ 00:08:21.534 START TEST raid_state_function_test 00:08:21.534 ************************************ 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:21.534 
15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:21.534 Process raid pid: 63688 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63688 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63688' 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63688 00:08:21.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63688 ']' 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.534 15:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.534 [2024-11-20 15:16:07.863202] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:08:21.534 [2024-11-20 15:16:07.863537] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.793 [2024-11-20 15:16:08.050094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.793 [2024-11-20 15:16:08.174857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.051 [2024-11-20 15:16:08.400682] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.051 [2024-11-20 15:16:08.400902] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.309 15:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.309 15:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:22.309 15:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:22.309 15:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.309 15:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.310 [2024-11-20 15:16:08.722074] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:22.310 [2024-11-20 15:16:08.722132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:22.310 [2024-11-20 15:16:08.722144] 
bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:22.310 [2024-11-20 15:16:08.722157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:22.310 [2024-11-20 15:16:08.722165] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:22.310 [2024-11-20 15:16:08.722177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:22.310 15:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.310 15:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:22.310 15:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.310 15:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.310 15:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:22.310 15:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.310 15:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.310 15:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.310 15:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.310 15:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.310 15:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.310 15:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.310 15:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:22.310 15:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.310 15:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.310 15:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.310 15:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.310 "name": "Existed_Raid", 00:08:22.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.310 "strip_size_kb": 64, 00:08:22.310 "state": "configuring", 00:08:22.310 "raid_level": "raid0", 00:08:22.310 "superblock": false, 00:08:22.310 "num_base_bdevs": 3, 00:08:22.310 "num_base_bdevs_discovered": 0, 00:08:22.310 "num_base_bdevs_operational": 3, 00:08:22.310 "base_bdevs_list": [ 00:08:22.310 { 00:08:22.310 "name": "BaseBdev1", 00:08:22.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.310 "is_configured": false, 00:08:22.310 "data_offset": 0, 00:08:22.310 "data_size": 0 00:08:22.310 }, 00:08:22.310 { 00:08:22.310 "name": "BaseBdev2", 00:08:22.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.310 "is_configured": false, 00:08:22.310 "data_offset": 0, 00:08:22.310 "data_size": 0 00:08:22.310 }, 00:08:22.310 { 00:08:22.310 "name": "BaseBdev3", 00:08:22.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.310 "is_configured": false, 00:08:22.310 "data_offset": 0, 00:08:22.310 "data_size": 0 00:08:22.310 } 00:08:22.310 ] 00:08:22.310 }' 00:08:22.310 15:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.310 15:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.877 15:16:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.877 [2024-11-20 15:16:09.141443] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:22.877 [2024-11-20 15:16:09.141481] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.877 [2024-11-20 15:16:09.153416] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:22.877 [2024-11-20 15:16:09.153466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:22.877 [2024-11-20 15:16:09.153477] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:22.877 [2024-11-20 15:16:09.153491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:22.877 [2024-11-20 15:16:09.153499] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:22.877 [2024-11-20 15:16:09.153512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.877 [2024-11-20 15:16:09.204640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.877 BaseBdev1 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.877 [ 00:08:22.877 { 00:08:22.877 "name": "BaseBdev1", 00:08:22.877 "aliases": [ 00:08:22.877 "dd0927d1-cf27-4aa6-838c-b12586dba146" 00:08:22.877 ], 00:08:22.877 
"product_name": "Malloc disk", 00:08:22.877 "block_size": 512, 00:08:22.877 "num_blocks": 65536, 00:08:22.877 "uuid": "dd0927d1-cf27-4aa6-838c-b12586dba146", 00:08:22.877 "assigned_rate_limits": { 00:08:22.877 "rw_ios_per_sec": 0, 00:08:22.877 "rw_mbytes_per_sec": 0, 00:08:22.877 "r_mbytes_per_sec": 0, 00:08:22.877 "w_mbytes_per_sec": 0 00:08:22.877 }, 00:08:22.877 "claimed": true, 00:08:22.877 "claim_type": "exclusive_write", 00:08:22.877 "zoned": false, 00:08:22.877 "supported_io_types": { 00:08:22.877 "read": true, 00:08:22.877 "write": true, 00:08:22.877 "unmap": true, 00:08:22.877 "flush": true, 00:08:22.877 "reset": true, 00:08:22.877 "nvme_admin": false, 00:08:22.877 "nvme_io": false, 00:08:22.877 "nvme_io_md": false, 00:08:22.877 "write_zeroes": true, 00:08:22.877 "zcopy": true, 00:08:22.877 "get_zone_info": false, 00:08:22.877 "zone_management": false, 00:08:22.877 "zone_append": false, 00:08:22.877 "compare": false, 00:08:22.877 "compare_and_write": false, 00:08:22.877 "abort": true, 00:08:22.877 "seek_hole": false, 00:08:22.877 "seek_data": false, 00:08:22.877 "copy": true, 00:08:22.877 "nvme_iov_md": false 00:08:22.877 }, 00:08:22.877 "memory_domains": [ 00:08:22.877 { 00:08:22.877 "dma_device_id": "system", 00:08:22.877 "dma_device_type": 1 00:08:22.877 }, 00:08:22.877 { 00:08:22.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.877 "dma_device_type": 2 00:08:22.877 } 00:08:22.877 ], 00:08:22.877 "driver_specific": {} 00:08:22.877 } 00:08:22.877 ] 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.877 15:16:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.877 "name": "Existed_Raid", 00:08:22.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.877 "strip_size_kb": 64, 00:08:22.877 "state": "configuring", 00:08:22.877 "raid_level": "raid0", 00:08:22.877 "superblock": false, 00:08:22.877 "num_base_bdevs": 3, 00:08:22.877 "num_base_bdevs_discovered": 1, 00:08:22.877 "num_base_bdevs_operational": 3, 00:08:22.877 "base_bdevs_list": [ 00:08:22.877 { 00:08:22.877 "name": "BaseBdev1", 
00:08:22.877 "uuid": "dd0927d1-cf27-4aa6-838c-b12586dba146", 00:08:22.877 "is_configured": true, 00:08:22.877 "data_offset": 0, 00:08:22.877 "data_size": 65536 00:08:22.877 }, 00:08:22.877 { 00:08:22.877 "name": "BaseBdev2", 00:08:22.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.877 "is_configured": false, 00:08:22.877 "data_offset": 0, 00:08:22.877 "data_size": 0 00:08:22.877 }, 00:08:22.877 { 00:08:22.877 "name": "BaseBdev3", 00:08:22.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.877 "is_configured": false, 00:08:22.877 "data_offset": 0, 00:08:22.877 "data_size": 0 00:08:22.877 } 00:08:22.877 ] 00:08:22.877 }' 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.877 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.445 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:23.445 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.445 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.445 [2024-11-20 15:16:09.700066] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:23.445 [2024-11-20 15:16:09.700123] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:23.445 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.445 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:23.445 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.445 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.445 [2024-11-20 
15:16:09.712097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:23.445 [2024-11-20 15:16:09.714417] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:23.445 [2024-11-20 15:16:09.714465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:23.445 [2024-11-20 15:16:09.714478] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:23.445 [2024-11-20 15:16:09.714491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:23.445 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.445 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:23.445 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:23.445 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:23.445 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.445 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.445 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.445 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.445 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.445 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.445 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.445 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:23.445 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.445 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.445 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.445 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.445 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.445 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.445 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.445 "name": "Existed_Raid", 00:08:23.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.445 "strip_size_kb": 64, 00:08:23.445 "state": "configuring", 00:08:23.445 "raid_level": "raid0", 00:08:23.445 "superblock": false, 00:08:23.445 "num_base_bdevs": 3, 00:08:23.445 "num_base_bdevs_discovered": 1, 00:08:23.445 "num_base_bdevs_operational": 3, 00:08:23.445 "base_bdevs_list": [ 00:08:23.445 { 00:08:23.445 "name": "BaseBdev1", 00:08:23.445 "uuid": "dd0927d1-cf27-4aa6-838c-b12586dba146", 00:08:23.445 "is_configured": true, 00:08:23.445 "data_offset": 0, 00:08:23.445 "data_size": 65536 00:08:23.445 }, 00:08:23.445 { 00:08:23.445 "name": "BaseBdev2", 00:08:23.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.445 "is_configured": false, 00:08:23.445 "data_offset": 0, 00:08:23.445 "data_size": 0 00:08:23.445 }, 00:08:23.445 { 00:08:23.445 "name": "BaseBdev3", 00:08:23.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.445 "is_configured": false, 00:08:23.445 "data_offset": 0, 00:08:23.445 "data_size": 0 00:08:23.445 } 00:08:23.445 ] 00:08:23.445 }' 00:08:23.445 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:23.445 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.704 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:23.704 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.704 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.962 [2024-11-20 15:16:10.208962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:23.962 BaseBdev2 00:08:23.962 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.962 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:23.962 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:23.962 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:23.962 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:23.962 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:23.962 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:23.962 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:23.962 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.962 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.963 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.963 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:23.963 15:16:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.963 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.963 [ 00:08:23.963 { 00:08:23.963 "name": "BaseBdev2", 00:08:23.963 "aliases": [ 00:08:23.963 "32f14a7d-b838-487e-9d29-490b97f1ba72" 00:08:23.963 ], 00:08:23.963 "product_name": "Malloc disk", 00:08:23.963 "block_size": 512, 00:08:23.963 "num_blocks": 65536, 00:08:23.963 "uuid": "32f14a7d-b838-487e-9d29-490b97f1ba72", 00:08:23.963 "assigned_rate_limits": { 00:08:23.963 "rw_ios_per_sec": 0, 00:08:23.963 "rw_mbytes_per_sec": 0, 00:08:23.963 "r_mbytes_per_sec": 0, 00:08:23.963 "w_mbytes_per_sec": 0 00:08:23.963 }, 00:08:23.963 "claimed": true, 00:08:23.963 "claim_type": "exclusive_write", 00:08:23.963 "zoned": false, 00:08:23.963 "supported_io_types": { 00:08:23.963 "read": true, 00:08:23.963 "write": true, 00:08:23.963 "unmap": true, 00:08:23.963 "flush": true, 00:08:23.963 "reset": true, 00:08:23.963 "nvme_admin": false, 00:08:23.963 "nvme_io": false, 00:08:23.963 "nvme_io_md": false, 00:08:23.963 "write_zeroes": true, 00:08:23.963 "zcopy": true, 00:08:23.963 "get_zone_info": false, 00:08:23.963 "zone_management": false, 00:08:23.963 "zone_append": false, 00:08:23.963 "compare": false, 00:08:23.963 "compare_and_write": false, 00:08:23.963 "abort": true, 00:08:23.963 "seek_hole": false, 00:08:23.963 "seek_data": false, 00:08:23.963 "copy": true, 00:08:23.963 "nvme_iov_md": false 00:08:23.963 }, 00:08:23.963 "memory_domains": [ 00:08:23.963 { 00:08:23.963 "dma_device_id": "system", 00:08:23.963 "dma_device_type": 1 00:08:23.963 }, 00:08:23.963 { 00:08:23.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.963 "dma_device_type": 2 00:08:23.963 } 00:08:23.963 ], 00:08:23.963 "driver_specific": {} 00:08:23.963 } 00:08:23.963 ] 00:08:23.963 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.963 15:16:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:23.963 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:23.963 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:23.963 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:23.963 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.963 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.963 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.963 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.963 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.963 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.963 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.963 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.963 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.963 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.963 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.963 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.963 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.963 15:16:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.963 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.963 "name": "Existed_Raid", 00:08:23.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.963 "strip_size_kb": 64, 00:08:23.963 "state": "configuring", 00:08:23.963 "raid_level": "raid0", 00:08:23.963 "superblock": false, 00:08:23.963 "num_base_bdevs": 3, 00:08:23.963 "num_base_bdevs_discovered": 2, 00:08:23.963 "num_base_bdevs_operational": 3, 00:08:23.963 "base_bdevs_list": [ 00:08:23.963 { 00:08:23.963 "name": "BaseBdev1", 00:08:23.963 "uuid": "dd0927d1-cf27-4aa6-838c-b12586dba146", 00:08:23.963 "is_configured": true, 00:08:23.963 "data_offset": 0, 00:08:23.963 "data_size": 65536 00:08:23.963 }, 00:08:23.963 { 00:08:23.963 "name": "BaseBdev2", 00:08:23.963 "uuid": "32f14a7d-b838-487e-9d29-490b97f1ba72", 00:08:23.963 "is_configured": true, 00:08:23.963 "data_offset": 0, 00:08:23.963 "data_size": 65536 00:08:23.963 }, 00:08:23.963 { 00:08:23.963 "name": "BaseBdev3", 00:08:23.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.963 "is_configured": false, 00:08:23.963 "data_offset": 0, 00:08:23.963 "data_size": 0 00:08:23.963 } 00:08:23.963 ] 00:08:23.963 }' 00:08:23.963 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.963 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.626 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:24.626 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.626 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.626 [2024-11-20 15:16:10.757524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:24.626 [2024-11-20 15:16:10.757574] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:24.626 [2024-11-20 15:16:10.757591] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:24.626 [2024-11-20 15:16:10.757921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:24.626 [2024-11-20 15:16:10.758105] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:24.626 [2024-11-20 15:16:10.758117] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:24.626 [2024-11-20 15:16:10.758398] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.626 BaseBdev3 00:08:24.626 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.626 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:24.626 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:24.626 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:24.626 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:24.626 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:24.626 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:24.626 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:24.626 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.626 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.626 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.626 
15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:24.626 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.626 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.626 [ 00:08:24.626 { 00:08:24.626 "name": "BaseBdev3", 00:08:24.626 "aliases": [ 00:08:24.626 "6f28f7f0-79ee-4e49-a4d5-984d205ae3ba" 00:08:24.626 ], 00:08:24.626 "product_name": "Malloc disk", 00:08:24.626 "block_size": 512, 00:08:24.626 "num_blocks": 65536, 00:08:24.627 "uuid": "6f28f7f0-79ee-4e49-a4d5-984d205ae3ba", 00:08:24.627 "assigned_rate_limits": { 00:08:24.627 "rw_ios_per_sec": 0, 00:08:24.627 "rw_mbytes_per_sec": 0, 00:08:24.627 "r_mbytes_per_sec": 0, 00:08:24.627 "w_mbytes_per_sec": 0 00:08:24.627 }, 00:08:24.627 "claimed": true, 00:08:24.627 "claim_type": "exclusive_write", 00:08:24.627 "zoned": false, 00:08:24.627 "supported_io_types": { 00:08:24.627 "read": true, 00:08:24.627 "write": true, 00:08:24.627 "unmap": true, 00:08:24.627 "flush": true, 00:08:24.627 "reset": true, 00:08:24.627 "nvme_admin": false, 00:08:24.627 "nvme_io": false, 00:08:24.627 "nvme_io_md": false, 00:08:24.627 "write_zeroes": true, 00:08:24.627 "zcopy": true, 00:08:24.627 "get_zone_info": false, 00:08:24.627 "zone_management": false, 00:08:24.627 "zone_append": false, 00:08:24.627 "compare": false, 00:08:24.627 "compare_and_write": false, 00:08:24.627 "abort": true, 00:08:24.627 "seek_hole": false, 00:08:24.627 "seek_data": false, 00:08:24.627 "copy": true, 00:08:24.627 "nvme_iov_md": false 00:08:24.627 }, 00:08:24.627 "memory_domains": [ 00:08:24.627 { 00:08:24.627 "dma_device_id": "system", 00:08:24.627 "dma_device_type": 1 00:08:24.627 }, 00:08:24.627 { 00:08:24.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.627 "dma_device_type": 2 00:08:24.627 } 00:08:24.627 ], 00:08:24.627 "driver_specific": {} 00:08:24.627 } 00:08:24.627 ] 
00:08:24.627 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.627 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:24.627 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:24.627 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:24.627 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:24.627 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.627 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:24.627 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.627 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.627 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.627 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.627 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.627 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.627 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.627 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.627 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.627 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.627 15:16:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:24.627 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.627 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.627 "name": "Existed_Raid", 00:08:24.627 "uuid": "77213c92-3b52-466a-9fc5-b8cc0cc3b5ee", 00:08:24.627 "strip_size_kb": 64, 00:08:24.627 "state": "online", 00:08:24.627 "raid_level": "raid0", 00:08:24.627 "superblock": false, 00:08:24.627 "num_base_bdevs": 3, 00:08:24.627 "num_base_bdevs_discovered": 3, 00:08:24.627 "num_base_bdevs_operational": 3, 00:08:24.627 "base_bdevs_list": [ 00:08:24.627 { 00:08:24.627 "name": "BaseBdev1", 00:08:24.627 "uuid": "dd0927d1-cf27-4aa6-838c-b12586dba146", 00:08:24.627 "is_configured": true, 00:08:24.627 "data_offset": 0, 00:08:24.627 "data_size": 65536 00:08:24.627 }, 00:08:24.627 { 00:08:24.627 "name": "BaseBdev2", 00:08:24.627 "uuid": "32f14a7d-b838-487e-9d29-490b97f1ba72", 00:08:24.627 "is_configured": true, 00:08:24.627 "data_offset": 0, 00:08:24.627 "data_size": 65536 00:08:24.627 }, 00:08:24.627 { 00:08:24.627 "name": "BaseBdev3", 00:08:24.627 "uuid": "6f28f7f0-79ee-4e49-a4d5-984d205ae3ba", 00:08:24.627 "is_configured": true, 00:08:24.627 "data_offset": 0, 00:08:24.627 "data_size": 65536 00:08:24.627 } 00:08:24.627 ] 00:08:24.627 }' 00:08:24.627 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.627 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.887 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:24.887 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:24.887 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:24.887 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:24.887 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:24.887 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:24.887 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:24.887 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:24.887 15:16:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.887 15:16:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.887 [2024-11-20 15:16:11.233296] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:24.887 15:16:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.887 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:24.887 "name": "Existed_Raid", 00:08:24.887 "aliases": [ 00:08:24.887 "77213c92-3b52-466a-9fc5-b8cc0cc3b5ee" 00:08:24.887 ], 00:08:24.887 "product_name": "Raid Volume", 00:08:24.887 "block_size": 512, 00:08:24.887 "num_blocks": 196608, 00:08:24.887 "uuid": "77213c92-3b52-466a-9fc5-b8cc0cc3b5ee", 00:08:24.887 "assigned_rate_limits": { 00:08:24.887 "rw_ios_per_sec": 0, 00:08:24.887 "rw_mbytes_per_sec": 0, 00:08:24.887 "r_mbytes_per_sec": 0, 00:08:24.887 "w_mbytes_per_sec": 0 00:08:24.887 }, 00:08:24.887 "claimed": false, 00:08:24.887 "zoned": false, 00:08:24.887 "supported_io_types": { 00:08:24.887 "read": true, 00:08:24.887 "write": true, 00:08:24.887 "unmap": true, 00:08:24.887 "flush": true, 00:08:24.887 "reset": true, 00:08:24.887 "nvme_admin": false, 00:08:24.887 "nvme_io": false, 00:08:24.887 "nvme_io_md": false, 00:08:24.887 "write_zeroes": true, 00:08:24.887 "zcopy": false, 00:08:24.887 "get_zone_info": false, 00:08:24.887 "zone_management": false, 00:08:24.887 
"zone_append": false, 00:08:24.887 "compare": false, 00:08:24.887 "compare_and_write": false, 00:08:24.887 "abort": false, 00:08:24.887 "seek_hole": false, 00:08:24.887 "seek_data": false, 00:08:24.887 "copy": false, 00:08:24.887 "nvme_iov_md": false 00:08:24.887 }, 00:08:24.887 "memory_domains": [ 00:08:24.887 { 00:08:24.887 "dma_device_id": "system", 00:08:24.887 "dma_device_type": 1 00:08:24.887 }, 00:08:24.887 { 00:08:24.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.887 "dma_device_type": 2 00:08:24.887 }, 00:08:24.887 { 00:08:24.887 "dma_device_id": "system", 00:08:24.887 "dma_device_type": 1 00:08:24.887 }, 00:08:24.887 { 00:08:24.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.887 "dma_device_type": 2 00:08:24.887 }, 00:08:24.887 { 00:08:24.887 "dma_device_id": "system", 00:08:24.887 "dma_device_type": 1 00:08:24.887 }, 00:08:24.887 { 00:08:24.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.887 "dma_device_type": 2 00:08:24.887 } 00:08:24.887 ], 00:08:24.887 "driver_specific": { 00:08:24.887 "raid": { 00:08:24.887 "uuid": "77213c92-3b52-466a-9fc5-b8cc0cc3b5ee", 00:08:24.887 "strip_size_kb": 64, 00:08:24.887 "state": "online", 00:08:24.887 "raid_level": "raid0", 00:08:24.887 "superblock": false, 00:08:24.887 "num_base_bdevs": 3, 00:08:24.887 "num_base_bdevs_discovered": 3, 00:08:24.887 "num_base_bdevs_operational": 3, 00:08:24.887 "base_bdevs_list": [ 00:08:24.887 { 00:08:24.887 "name": "BaseBdev1", 00:08:24.887 "uuid": "dd0927d1-cf27-4aa6-838c-b12586dba146", 00:08:24.887 "is_configured": true, 00:08:24.887 "data_offset": 0, 00:08:24.887 "data_size": 65536 00:08:24.887 }, 00:08:24.887 { 00:08:24.887 "name": "BaseBdev2", 00:08:24.887 "uuid": "32f14a7d-b838-487e-9d29-490b97f1ba72", 00:08:24.887 "is_configured": true, 00:08:24.887 "data_offset": 0, 00:08:24.887 "data_size": 65536 00:08:24.887 }, 00:08:24.887 { 00:08:24.887 "name": "BaseBdev3", 00:08:24.887 "uuid": "6f28f7f0-79ee-4e49-a4d5-984d205ae3ba", 00:08:24.887 "is_configured": true, 
00:08:24.887 "data_offset": 0, 00:08:24.887 "data_size": 65536 00:08:24.887 } 00:08:24.887 ] 00:08:24.887 } 00:08:24.887 } 00:08:24.887 }' 00:08:24.887 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:24.887 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:24.887 BaseBdev2 00:08:24.887 BaseBdev3' 00:08:24.887 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.887 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:24.888 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.147 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:25.147 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.147 15:16:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.147 15:16:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.147 15:16:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.147 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.147 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.147 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.147 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:25.147 15:16:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.147 15:16:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.147 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.147 15:16:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.147 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.147 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.147 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.147 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:25.147 15:16:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.148 15:16:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.148 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.148 15:16:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.148 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.148 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.148 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:25.148 15:16:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.148 15:16:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.148 [2024-11-20 15:16:11.532603] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:25.148 [2024-11-20 15:16:11.532776] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:25.148 [2024-11-20 15:16:11.532864] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:25.406 15:16:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.406 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:25.406 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:25.406 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:25.406 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:25.406 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:25.406 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:25.406 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.406 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:25.406 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.406 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.406 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:25.406 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.406 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.406 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:25.406 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.406 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.406 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.406 15:16:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.406 15:16:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.406 15:16:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.406 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.406 "name": "Existed_Raid", 00:08:25.406 "uuid": "77213c92-3b52-466a-9fc5-b8cc0cc3b5ee", 00:08:25.406 "strip_size_kb": 64, 00:08:25.406 "state": "offline", 00:08:25.406 "raid_level": "raid0", 00:08:25.406 "superblock": false, 00:08:25.406 "num_base_bdevs": 3, 00:08:25.406 "num_base_bdevs_discovered": 2, 00:08:25.406 "num_base_bdevs_operational": 2, 00:08:25.406 "base_bdevs_list": [ 00:08:25.406 { 00:08:25.406 "name": null, 00:08:25.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.406 "is_configured": false, 00:08:25.406 "data_offset": 0, 00:08:25.406 "data_size": 65536 00:08:25.406 }, 00:08:25.406 { 00:08:25.406 "name": "BaseBdev2", 00:08:25.406 "uuid": "32f14a7d-b838-487e-9d29-490b97f1ba72", 00:08:25.406 "is_configured": true, 00:08:25.406 "data_offset": 0, 00:08:25.406 "data_size": 65536 00:08:25.406 }, 00:08:25.406 { 00:08:25.406 "name": "BaseBdev3", 00:08:25.406 "uuid": "6f28f7f0-79ee-4e49-a4d5-984d205ae3ba", 00:08:25.406 "is_configured": true, 00:08:25.406 "data_offset": 0, 00:08:25.406 "data_size": 65536 00:08:25.406 } 00:08:25.406 ] 00:08:25.406 }' 00:08:25.406 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.406 15:16:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.664 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:25.664 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:25.664 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.664 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.664 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.664 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:25.664 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.664 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:25.665 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:25.665 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:25.665 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.665 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.665 [2024-11-20 15:16:12.113479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:25.924 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.924 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:25.924 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:25.924 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.924 15:16:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:25.924 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.924 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.924 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.924 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:25.924 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:25.924 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:25.924 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.924 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.924 [2024-11-20 15:16:12.270329] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:25.924 [2024-11-20 15:16:12.270517] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:25.924 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.924 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:25.924 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:25.924 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.924 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.924 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:25.924 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:25.924 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.184 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:26.184 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:26.184 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:26.184 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:26.184 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:26.184 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:26.184 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.184 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.184 BaseBdev2 00:08:26.184 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.184 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:26.184 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:26.184 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:26.184 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:26.184 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:26.184 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:26.184 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:26.184 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:26.184 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.184 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.184 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:26.184 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.184 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.184 [ 00:08:26.184 { 00:08:26.184 "name": "BaseBdev2", 00:08:26.184 "aliases": [ 00:08:26.184 "12d19111-c8c1-45a4-8305-48e663579a55" 00:08:26.184 ], 00:08:26.184 "product_name": "Malloc disk", 00:08:26.184 "block_size": 512, 00:08:26.184 "num_blocks": 65536, 00:08:26.185 "uuid": "12d19111-c8c1-45a4-8305-48e663579a55", 00:08:26.185 "assigned_rate_limits": { 00:08:26.185 "rw_ios_per_sec": 0, 00:08:26.185 "rw_mbytes_per_sec": 0, 00:08:26.185 "r_mbytes_per_sec": 0, 00:08:26.185 "w_mbytes_per_sec": 0 00:08:26.185 }, 00:08:26.185 "claimed": false, 00:08:26.185 "zoned": false, 00:08:26.185 "supported_io_types": { 00:08:26.185 "read": true, 00:08:26.185 "write": true, 00:08:26.185 "unmap": true, 00:08:26.185 "flush": true, 00:08:26.185 "reset": true, 00:08:26.185 "nvme_admin": false, 00:08:26.185 "nvme_io": false, 00:08:26.185 "nvme_io_md": false, 00:08:26.185 "write_zeroes": true, 00:08:26.185 "zcopy": true, 00:08:26.185 "get_zone_info": false, 00:08:26.185 "zone_management": false, 00:08:26.185 "zone_append": false, 00:08:26.185 "compare": false, 00:08:26.185 "compare_and_write": false, 00:08:26.185 "abort": true, 00:08:26.185 "seek_hole": false, 00:08:26.185 "seek_data": false, 00:08:26.185 "copy": true, 00:08:26.185 "nvme_iov_md": false 00:08:26.185 }, 00:08:26.185 "memory_domains": [ 00:08:26.185 { 00:08:26.185 "dma_device_id": "system", 00:08:26.185 "dma_device_type": 1 00:08:26.185 }, 
00:08:26.185 { 00:08:26.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.185 "dma_device_type": 2 00:08:26.185 } 00:08:26.185 ], 00:08:26.185 "driver_specific": {} 00:08:26.185 } 00:08:26.185 ] 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.185 BaseBdev3 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.185 [ 00:08:26.185 { 00:08:26.185 "name": "BaseBdev3", 00:08:26.185 "aliases": [ 00:08:26.185 "2dc5f320-0eea-4a94-926c-305b8dd1db7b" 00:08:26.185 ], 00:08:26.185 "product_name": "Malloc disk", 00:08:26.185 "block_size": 512, 00:08:26.185 "num_blocks": 65536, 00:08:26.185 "uuid": "2dc5f320-0eea-4a94-926c-305b8dd1db7b", 00:08:26.185 "assigned_rate_limits": { 00:08:26.185 "rw_ios_per_sec": 0, 00:08:26.185 "rw_mbytes_per_sec": 0, 00:08:26.185 "r_mbytes_per_sec": 0, 00:08:26.185 "w_mbytes_per_sec": 0 00:08:26.185 }, 00:08:26.185 "claimed": false, 00:08:26.185 "zoned": false, 00:08:26.185 "supported_io_types": { 00:08:26.185 "read": true, 00:08:26.185 "write": true, 00:08:26.185 "unmap": true, 00:08:26.185 "flush": true, 00:08:26.185 "reset": true, 00:08:26.185 "nvme_admin": false, 00:08:26.185 "nvme_io": false, 00:08:26.185 "nvme_io_md": false, 00:08:26.185 "write_zeroes": true, 00:08:26.185 "zcopy": true, 00:08:26.185 "get_zone_info": false, 00:08:26.185 "zone_management": false, 00:08:26.185 "zone_append": false, 00:08:26.185 "compare": false, 00:08:26.185 "compare_and_write": false, 00:08:26.185 "abort": true, 00:08:26.185 "seek_hole": false, 00:08:26.185 "seek_data": false, 00:08:26.185 "copy": true, 00:08:26.185 "nvme_iov_md": false 00:08:26.185 }, 00:08:26.185 "memory_domains": [ 00:08:26.185 { 00:08:26.185 "dma_device_id": "system", 00:08:26.185 "dma_device_type": 1 00:08:26.185 }, 00:08:26.185 { 
00:08:26.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.185 "dma_device_type": 2 00:08:26.185 } 00:08:26.185 ], 00:08:26.185 "driver_specific": {} 00:08:26.185 } 00:08:26.185 ] 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.185 [2024-11-20 15:16:12.629330] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:26.185 [2024-11-20 15:16:12.629385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:26.185 [2024-11-20 15:16:12.629415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:26.185 [2024-11-20 15:16:12.631735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.185 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.445 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.445 "name": "Existed_Raid", 00:08:26.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.445 "strip_size_kb": 64, 00:08:26.445 "state": "configuring", 00:08:26.445 "raid_level": "raid0", 00:08:26.445 "superblock": false, 00:08:26.445 "num_base_bdevs": 3, 00:08:26.445 "num_base_bdevs_discovered": 2, 00:08:26.445 "num_base_bdevs_operational": 3, 00:08:26.445 "base_bdevs_list": [ 00:08:26.445 { 00:08:26.445 "name": "BaseBdev1", 00:08:26.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.445 
"is_configured": false, 00:08:26.445 "data_offset": 0, 00:08:26.445 "data_size": 0 00:08:26.445 }, 00:08:26.445 { 00:08:26.445 "name": "BaseBdev2", 00:08:26.445 "uuid": "12d19111-c8c1-45a4-8305-48e663579a55", 00:08:26.445 "is_configured": true, 00:08:26.445 "data_offset": 0, 00:08:26.445 "data_size": 65536 00:08:26.445 }, 00:08:26.445 { 00:08:26.445 "name": "BaseBdev3", 00:08:26.445 "uuid": "2dc5f320-0eea-4a94-926c-305b8dd1db7b", 00:08:26.445 "is_configured": true, 00:08:26.445 "data_offset": 0, 00:08:26.445 "data_size": 65536 00:08:26.445 } 00:08:26.445 ] 00:08:26.445 }' 00:08:26.445 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.445 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.704 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:26.704 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.704 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.704 [2024-11-20 15:16:13.096733] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:26.704 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.704 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:26.704 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.704 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.704 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.704 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.704 15:16:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.704 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.704 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.704 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.704 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.704 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.704 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.704 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.704 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.704 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.704 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.704 "name": "Existed_Raid", 00:08:26.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.704 "strip_size_kb": 64, 00:08:26.704 "state": "configuring", 00:08:26.704 "raid_level": "raid0", 00:08:26.704 "superblock": false, 00:08:26.704 "num_base_bdevs": 3, 00:08:26.704 "num_base_bdevs_discovered": 1, 00:08:26.704 "num_base_bdevs_operational": 3, 00:08:26.704 "base_bdevs_list": [ 00:08:26.704 { 00:08:26.704 "name": "BaseBdev1", 00:08:26.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.705 "is_configured": false, 00:08:26.705 "data_offset": 0, 00:08:26.705 "data_size": 0 00:08:26.705 }, 00:08:26.705 { 00:08:26.705 "name": null, 00:08:26.705 "uuid": "12d19111-c8c1-45a4-8305-48e663579a55", 00:08:26.705 "is_configured": false, 00:08:26.705 "data_offset": 0, 
00:08:26.705 "data_size": 65536 00:08:26.705 }, 00:08:26.705 { 00:08:26.705 "name": "BaseBdev3", 00:08:26.705 "uuid": "2dc5f320-0eea-4a94-926c-305b8dd1db7b", 00:08:26.705 "is_configured": true, 00:08:26.705 "data_offset": 0, 00:08:26.705 "data_size": 65536 00:08:26.705 } 00:08:26.705 ] 00:08:26.705 }' 00:08:26.705 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.705 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.274 [2024-11-20 15:16:13.584714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:27.274 BaseBdev1 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.274 [ 00:08:27.274 { 00:08:27.274 "name": "BaseBdev1", 00:08:27.274 "aliases": [ 00:08:27.274 "3cfcf330-c2a1-47c3-bf84-71855ececcdb" 00:08:27.274 ], 00:08:27.274 "product_name": "Malloc disk", 00:08:27.274 "block_size": 512, 00:08:27.274 "num_blocks": 65536, 00:08:27.274 "uuid": "3cfcf330-c2a1-47c3-bf84-71855ececcdb", 00:08:27.274 "assigned_rate_limits": { 00:08:27.274 "rw_ios_per_sec": 0, 00:08:27.274 "rw_mbytes_per_sec": 0, 00:08:27.274 "r_mbytes_per_sec": 0, 00:08:27.274 "w_mbytes_per_sec": 0 00:08:27.274 }, 00:08:27.274 "claimed": true, 00:08:27.274 "claim_type": "exclusive_write", 00:08:27.274 "zoned": false, 00:08:27.274 "supported_io_types": { 00:08:27.274 "read": true, 00:08:27.274 "write": true, 00:08:27.274 "unmap": 
true, 00:08:27.274 "flush": true, 00:08:27.274 "reset": true, 00:08:27.274 "nvme_admin": false, 00:08:27.274 "nvme_io": false, 00:08:27.274 "nvme_io_md": false, 00:08:27.274 "write_zeroes": true, 00:08:27.274 "zcopy": true, 00:08:27.274 "get_zone_info": false, 00:08:27.274 "zone_management": false, 00:08:27.274 "zone_append": false, 00:08:27.274 "compare": false, 00:08:27.274 "compare_and_write": false, 00:08:27.274 "abort": true, 00:08:27.274 "seek_hole": false, 00:08:27.274 "seek_data": false, 00:08:27.274 "copy": true, 00:08:27.274 "nvme_iov_md": false 00:08:27.274 }, 00:08:27.274 "memory_domains": [ 00:08:27.274 { 00:08:27.274 "dma_device_id": "system", 00:08:27.274 "dma_device_type": 1 00:08:27.274 }, 00:08:27.274 { 00:08:27.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.274 "dma_device_type": 2 00:08:27.274 } 00:08:27.274 ], 00:08:27.274 "driver_specific": {} 00:08:27.274 } 00:08:27.274 ] 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.274 15:16:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.274 "name": "Existed_Raid", 00:08:27.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.274 "strip_size_kb": 64, 00:08:27.274 "state": "configuring", 00:08:27.274 "raid_level": "raid0", 00:08:27.274 "superblock": false, 00:08:27.274 "num_base_bdevs": 3, 00:08:27.274 "num_base_bdevs_discovered": 2, 00:08:27.274 "num_base_bdevs_operational": 3, 00:08:27.274 "base_bdevs_list": [ 00:08:27.274 { 00:08:27.274 "name": "BaseBdev1", 00:08:27.274 "uuid": "3cfcf330-c2a1-47c3-bf84-71855ececcdb", 00:08:27.274 "is_configured": true, 00:08:27.274 "data_offset": 0, 00:08:27.274 "data_size": 65536 00:08:27.274 }, 00:08:27.274 { 00:08:27.274 "name": null, 00:08:27.274 "uuid": "12d19111-c8c1-45a4-8305-48e663579a55", 00:08:27.274 "is_configured": false, 00:08:27.274 "data_offset": 0, 00:08:27.274 "data_size": 65536 00:08:27.274 }, 00:08:27.274 { 00:08:27.274 "name": "BaseBdev3", 00:08:27.274 "uuid": "2dc5f320-0eea-4a94-926c-305b8dd1db7b", 00:08:27.274 "is_configured": true, 00:08:27.274 "data_offset": 0, 
00:08:27.274 "data_size": 65536 00:08:27.274 } 00:08:27.274 ] 00:08:27.274 }' 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.274 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.842 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.842 15:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.843 15:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.843 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:27.843 15:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.843 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:27.843 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:27.843 15:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.843 15:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.843 [2024-11-20 15:16:14.080104] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:27.843 15:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.843 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:27.843 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.843 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.843 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:27.843 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.843 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.843 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.843 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.843 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.843 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.843 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.843 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.843 15:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.843 15:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.843 15:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.843 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.843 "name": "Existed_Raid", 00:08:27.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.843 "strip_size_kb": 64, 00:08:27.843 "state": "configuring", 00:08:27.843 "raid_level": "raid0", 00:08:27.843 "superblock": false, 00:08:27.843 "num_base_bdevs": 3, 00:08:27.843 "num_base_bdevs_discovered": 1, 00:08:27.843 "num_base_bdevs_operational": 3, 00:08:27.843 "base_bdevs_list": [ 00:08:27.843 { 00:08:27.843 "name": "BaseBdev1", 00:08:27.843 "uuid": "3cfcf330-c2a1-47c3-bf84-71855ececcdb", 00:08:27.843 "is_configured": true, 00:08:27.843 "data_offset": 0, 00:08:27.843 "data_size": 65536 00:08:27.843 }, 00:08:27.843 { 
00:08:27.843 "name": null, 00:08:27.843 "uuid": "12d19111-c8c1-45a4-8305-48e663579a55", 00:08:27.843 "is_configured": false, 00:08:27.843 "data_offset": 0, 00:08:27.843 "data_size": 65536 00:08:27.843 }, 00:08:27.843 { 00:08:27.843 "name": null, 00:08:27.843 "uuid": "2dc5f320-0eea-4a94-926c-305b8dd1db7b", 00:08:27.843 "is_configured": false, 00:08:27.843 "data_offset": 0, 00:08:27.843 "data_size": 65536 00:08:27.843 } 00:08:27.843 ] 00:08:27.843 }' 00:08:27.843 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.843 15:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.103 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.103 15:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.103 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:28.103 15:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.103 15:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.103 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:28.103 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:28.103 15:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.103 15:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.103 [2024-11-20 15:16:14.571524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:28.103 15:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.103 15:16:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:28.103 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.103 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.103 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.103 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.103 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.103 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.103 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.103 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.103 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.103 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.103 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.103 15:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.103 15:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.363 15:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.363 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.363 "name": "Existed_Raid", 00:08:28.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.363 "strip_size_kb": 64, 00:08:28.363 "state": "configuring", 00:08:28.363 "raid_level": "raid0", 00:08:28.363 
"superblock": false, 00:08:28.363 "num_base_bdevs": 3, 00:08:28.363 "num_base_bdevs_discovered": 2, 00:08:28.363 "num_base_bdevs_operational": 3, 00:08:28.363 "base_bdevs_list": [ 00:08:28.363 { 00:08:28.363 "name": "BaseBdev1", 00:08:28.363 "uuid": "3cfcf330-c2a1-47c3-bf84-71855ececcdb", 00:08:28.363 "is_configured": true, 00:08:28.363 "data_offset": 0, 00:08:28.363 "data_size": 65536 00:08:28.363 }, 00:08:28.363 { 00:08:28.363 "name": null, 00:08:28.363 "uuid": "12d19111-c8c1-45a4-8305-48e663579a55", 00:08:28.363 "is_configured": false, 00:08:28.363 "data_offset": 0, 00:08:28.363 "data_size": 65536 00:08:28.363 }, 00:08:28.363 { 00:08:28.363 "name": "BaseBdev3", 00:08:28.363 "uuid": "2dc5f320-0eea-4a94-926c-305b8dd1db7b", 00:08:28.363 "is_configured": true, 00:08:28.363 "data_offset": 0, 00:08:28.363 "data_size": 65536 00:08:28.363 } 00:08:28.363 ] 00:08:28.363 }' 00:08:28.363 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.363 15:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.621 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.621 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:28.621 15:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.621 15:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.621 15:16:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.621 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:28.621 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:28.621 15:16:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:28.621 15:16:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.621 [2024-11-20 15:16:15.027411] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:28.879 15:16:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.879 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:28.879 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.879 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.879 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.879 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.879 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.879 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.879 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.879 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.879 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.879 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.879 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.879 15:16:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.879 15:16:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.879 15:16:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.879 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.879 "name": "Existed_Raid", 00:08:28.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.879 "strip_size_kb": 64, 00:08:28.879 "state": "configuring", 00:08:28.879 "raid_level": "raid0", 00:08:28.879 "superblock": false, 00:08:28.879 "num_base_bdevs": 3, 00:08:28.879 "num_base_bdevs_discovered": 1, 00:08:28.879 "num_base_bdevs_operational": 3, 00:08:28.879 "base_bdevs_list": [ 00:08:28.879 { 00:08:28.879 "name": null, 00:08:28.879 "uuid": "3cfcf330-c2a1-47c3-bf84-71855ececcdb", 00:08:28.879 "is_configured": false, 00:08:28.879 "data_offset": 0, 00:08:28.879 "data_size": 65536 00:08:28.879 }, 00:08:28.879 { 00:08:28.879 "name": null, 00:08:28.879 "uuid": "12d19111-c8c1-45a4-8305-48e663579a55", 00:08:28.879 "is_configured": false, 00:08:28.879 "data_offset": 0, 00:08:28.879 "data_size": 65536 00:08:28.879 }, 00:08:28.879 { 00:08:28.879 "name": "BaseBdev3", 00:08:28.879 "uuid": "2dc5f320-0eea-4a94-926c-305b8dd1db7b", 00:08:28.879 "is_configured": true, 00:08:28.879 "data_offset": 0, 00:08:28.879 "data_size": 65536 00:08:28.879 } 00:08:28.879 ] 00:08:28.879 }' 00:08:28.879 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.879 15:16:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.138 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.138 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:29.138 15:16:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.138 15:16:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.138 15:16:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:08:29.397 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:29.397 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:29.397 15:16:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.397 15:16:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.397 [2024-11-20 15:16:15.635407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:29.397 15:16:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.397 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:29.397 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.397 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.397 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:29.397 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.397 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.397 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.397 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.397 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.397 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.397 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:29.397 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.397 15:16:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.397 15:16:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.397 15:16:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.397 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.397 "name": "Existed_Raid", 00:08:29.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.397 "strip_size_kb": 64, 00:08:29.397 "state": "configuring", 00:08:29.397 "raid_level": "raid0", 00:08:29.397 "superblock": false, 00:08:29.397 "num_base_bdevs": 3, 00:08:29.397 "num_base_bdevs_discovered": 2, 00:08:29.397 "num_base_bdevs_operational": 3, 00:08:29.397 "base_bdevs_list": [ 00:08:29.397 { 00:08:29.397 "name": null, 00:08:29.397 "uuid": "3cfcf330-c2a1-47c3-bf84-71855ececcdb", 00:08:29.397 "is_configured": false, 00:08:29.397 "data_offset": 0, 00:08:29.397 "data_size": 65536 00:08:29.397 }, 00:08:29.397 { 00:08:29.397 "name": "BaseBdev2", 00:08:29.397 "uuid": "12d19111-c8c1-45a4-8305-48e663579a55", 00:08:29.397 "is_configured": true, 00:08:29.397 "data_offset": 0, 00:08:29.397 "data_size": 65536 00:08:29.397 }, 00:08:29.397 { 00:08:29.397 "name": "BaseBdev3", 00:08:29.397 "uuid": "2dc5f320-0eea-4a94-926c-305b8dd1db7b", 00:08:29.397 "is_configured": true, 00:08:29.397 "data_offset": 0, 00:08:29.397 "data_size": 65536 00:08:29.397 } 00:08:29.397 ] 00:08:29.397 }' 00:08:29.397 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.397 15:16:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.656 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.656 
15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.656 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:29.656 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.656 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.656 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:29.656 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.656 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.656 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.656 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:29.656 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.915 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3cfcf330-c2a1-47c3-bf84-71855ececcdb 00:08:29.915 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.915 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.915 [2024-11-20 15:16:16.206548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:29.915 [2024-11-20 15:16:16.206605] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:29.915 [2024-11-20 15:16:16.206617] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:29.915 [2024-11-20 15:16:16.206923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:08:29.915 [2024-11-20 15:16:16.207075] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:29.915 [2024-11-20 15:16:16.207091] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:29.915 [2024-11-20 15:16:16.207392] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.915 NewBaseBdev 00:08:29.915 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.915 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:29.915 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:29.915 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:29.915 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:29.915 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:29.916 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:29.916 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:29.916 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.916 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.916 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.916 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:29.916 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.916 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:29.916 [ 00:08:29.916 { 00:08:29.916 "name": "NewBaseBdev", 00:08:29.916 "aliases": [ 00:08:29.916 "3cfcf330-c2a1-47c3-bf84-71855ececcdb" 00:08:29.916 ], 00:08:29.916 "product_name": "Malloc disk", 00:08:29.916 "block_size": 512, 00:08:29.916 "num_blocks": 65536, 00:08:29.916 "uuid": "3cfcf330-c2a1-47c3-bf84-71855ececcdb", 00:08:29.916 "assigned_rate_limits": { 00:08:29.916 "rw_ios_per_sec": 0, 00:08:29.916 "rw_mbytes_per_sec": 0, 00:08:29.916 "r_mbytes_per_sec": 0, 00:08:29.916 "w_mbytes_per_sec": 0 00:08:29.916 }, 00:08:29.916 "claimed": true, 00:08:29.916 "claim_type": "exclusive_write", 00:08:29.916 "zoned": false, 00:08:29.916 "supported_io_types": { 00:08:29.916 "read": true, 00:08:29.916 "write": true, 00:08:29.916 "unmap": true, 00:08:29.916 "flush": true, 00:08:29.916 "reset": true, 00:08:29.916 "nvme_admin": false, 00:08:29.916 "nvme_io": false, 00:08:29.916 "nvme_io_md": false, 00:08:29.916 "write_zeroes": true, 00:08:29.916 "zcopy": true, 00:08:29.916 "get_zone_info": false, 00:08:29.916 "zone_management": false, 00:08:29.916 "zone_append": false, 00:08:29.916 "compare": false, 00:08:29.916 "compare_and_write": false, 00:08:29.916 "abort": true, 00:08:29.916 "seek_hole": false, 00:08:29.916 "seek_data": false, 00:08:29.916 "copy": true, 00:08:29.916 "nvme_iov_md": false 00:08:29.916 }, 00:08:29.916 "memory_domains": [ 00:08:29.916 { 00:08:29.916 "dma_device_id": "system", 00:08:29.916 "dma_device_type": 1 00:08:29.916 }, 00:08:29.916 { 00:08:29.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.916 "dma_device_type": 2 00:08:29.916 } 00:08:29.916 ], 00:08:29.916 "driver_specific": {} 00:08:29.916 } 00:08:29.916 ] 00:08:29.916 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.916 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:29.916 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:29.916 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.916 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:29.916 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:29.916 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.916 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.916 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.916 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.916 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.916 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.916 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.916 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.916 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.916 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.916 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.916 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.916 "name": "Existed_Raid", 00:08:29.916 "uuid": "13772137-5388-4d51-815b-26e482c77609", 00:08:29.916 "strip_size_kb": 64, 00:08:29.916 "state": "online", 00:08:29.916 "raid_level": "raid0", 00:08:29.916 "superblock": false, 00:08:29.916 "num_base_bdevs": 3, 00:08:29.916 
"num_base_bdevs_discovered": 3, 00:08:29.916 "num_base_bdevs_operational": 3, 00:08:29.916 "base_bdevs_list": [ 00:08:29.916 { 00:08:29.916 "name": "NewBaseBdev", 00:08:29.916 "uuid": "3cfcf330-c2a1-47c3-bf84-71855ececcdb", 00:08:29.916 "is_configured": true, 00:08:29.916 "data_offset": 0, 00:08:29.916 "data_size": 65536 00:08:29.916 }, 00:08:29.916 { 00:08:29.916 "name": "BaseBdev2", 00:08:29.916 "uuid": "12d19111-c8c1-45a4-8305-48e663579a55", 00:08:29.916 "is_configured": true, 00:08:29.916 "data_offset": 0, 00:08:29.916 "data_size": 65536 00:08:29.916 }, 00:08:29.916 { 00:08:29.916 "name": "BaseBdev3", 00:08:29.916 "uuid": "2dc5f320-0eea-4a94-926c-305b8dd1db7b", 00:08:29.916 "is_configured": true, 00:08:29.916 "data_offset": 0, 00:08:29.916 "data_size": 65536 00:08:29.916 } 00:08:29.916 ] 00:08:29.916 }' 00:08:29.916 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.916 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.177 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:30.177 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:30.177 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:30.177 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:30.177 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:30.177 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:30.177 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:30.177 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:30.177 15:16:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.177 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.177 [2024-11-20 15:16:16.642301] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:30.436 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.436 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:30.436 "name": "Existed_Raid", 00:08:30.436 "aliases": [ 00:08:30.436 "13772137-5388-4d51-815b-26e482c77609" 00:08:30.436 ], 00:08:30.436 "product_name": "Raid Volume", 00:08:30.436 "block_size": 512, 00:08:30.436 "num_blocks": 196608, 00:08:30.436 "uuid": "13772137-5388-4d51-815b-26e482c77609", 00:08:30.436 "assigned_rate_limits": { 00:08:30.436 "rw_ios_per_sec": 0, 00:08:30.436 "rw_mbytes_per_sec": 0, 00:08:30.436 "r_mbytes_per_sec": 0, 00:08:30.436 "w_mbytes_per_sec": 0 00:08:30.436 }, 00:08:30.436 "claimed": false, 00:08:30.436 "zoned": false, 00:08:30.436 "supported_io_types": { 00:08:30.436 "read": true, 00:08:30.436 "write": true, 00:08:30.436 "unmap": true, 00:08:30.436 "flush": true, 00:08:30.436 "reset": true, 00:08:30.436 "nvme_admin": false, 00:08:30.436 "nvme_io": false, 00:08:30.436 "nvme_io_md": false, 00:08:30.436 "write_zeroes": true, 00:08:30.436 "zcopy": false, 00:08:30.436 "get_zone_info": false, 00:08:30.436 "zone_management": false, 00:08:30.436 "zone_append": false, 00:08:30.436 "compare": false, 00:08:30.436 "compare_and_write": false, 00:08:30.436 "abort": false, 00:08:30.436 "seek_hole": false, 00:08:30.436 "seek_data": false, 00:08:30.436 "copy": false, 00:08:30.436 "nvme_iov_md": false 00:08:30.436 }, 00:08:30.436 "memory_domains": [ 00:08:30.436 { 00:08:30.436 "dma_device_id": "system", 00:08:30.436 "dma_device_type": 1 00:08:30.436 }, 00:08:30.436 { 00:08:30.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.436 "dma_device_type": 2 00:08:30.436 }, 
00:08:30.436 { 00:08:30.436 "dma_device_id": "system", 00:08:30.436 "dma_device_type": 1 00:08:30.436 }, 00:08:30.436 { 00:08:30.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.436 "dma_device_type": 2 00:08:30.436 }, 00:08:30.436 { 00:08:30.436 "dma_device_id": "system", 00:08:30.436 "dma_device_type": 1 00:08:30.436 }, 00:08:30.436 { 00:08:30.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.436 "dma_device_type": 2 00:08:30.436 } 00:08:30.436 ], 00:08:30.436 "driver_specific": { 00:08:30.436 "raid": { 00:08:30.436 "uuid": "13772137-5388-4d51-815b-26e482c77609", 00:08:30.436 "strip_size_kb": 64, 00:08:30.436 "state": "online", 00:08:30.436 "raid_level": "raid0", 00:08:30.436 "superblock": false, 00:08:30.436 "num_base_bdevs": 3, 00:08:30.436 "num_base_bdevs_discovered": 3, 00:08:30.436 "num_base_bdevs_operational": 3, 00:08:30.436 "base_bdevs_list": [ 00:08:30.436 { 00:08:30.436 "name": "NewBaseBdev", 00:08:30.436 "uuid": "3cfcf330-c2a1-47c3-bf84-71855ececcdb", 00:08:30.436 "is_configured": true, 00:08:30.436 "data_offset": 0, 00:08:30.436 "data_size": 65536 00:08:30.436 }, 00:08:30.436 { 00:08:30.436 "name": "BaseBdev2", 00:08:30.436 "uuid": "12d19111-c8c1-45a4-8305-48e663579a55", 00:08:30.436 "is_configured": true, 00:08:30.437 "data_offset": 0, 00:08:30.437 "data_size": 65536 00:08:30.437 }, 00:08:30.437 { 00:08:30.437 "name": "BaseBdev3", 00:08:30.437 "uuid": "2dc5f320-0eea-4a94-926c-305b8dd1db7b", 00:08:30.437 "is_configured": true, 00:08:30.437 "data_offset": 0, 00:08:30.437 "data_size": 65536 00:08:30.437 } 00:08:30.437 ] 00:08:30.437 } 00:08:30.437 } 00:08:30.437 }' 00:08:30.437 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:30.437 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:30.437 BaseBdev2 00:08:30.437 BaseBdev3' 00:08:30.437 15:16:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.437 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:30.437 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.437 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:30.437 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.437 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.437 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.437 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.437 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.437 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.437 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.437 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.437 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:30.437 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.437 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.437 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.437 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:30.437 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.437 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.437 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:30.437 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.437 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.437 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.437 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.437 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.695 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.695 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:30.695 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.695 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.695 [2024-11-20 15:16:16.925654] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:30.695 [2024-11-20 15:16:16.925828] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:30.695 [2024-11-20 15:16:16.925943] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:30.695 [2024-11-20 15:16:16.926000] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:30.695 [2024-11-20 15:16:16.926016] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:30.695 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.695 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63688 00:08:30.695 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63688 ']' 00:08:30.695 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63688 00:08:30.695 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:30.695 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:30.695 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63688 00:08:30.695 killing process with pid 63688 00:08:30.695 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:30.695 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:30.695 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63688' 00:08:30.695 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63688 00:08:30.695 [2024-11-20 15:16:16.969783] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:30.695 15:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63688 00:08:30.954 [2024-11-20 15:16:17.285152] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:32.342 00:08:32.342 real 0m10.671s 00:08:32.342 user 0m17.011s 00:08:32.342 sys 0m2.003s 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.342 ************************************ 00:08:32.342 END TEST raid_state_function_test 00:08:32.342 ************************************ 00:08:32.342 15:16:18 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:32.342 15:16:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:32.342 15:16:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.342 15:16:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:32.342 ************************************ 00:08:32.342 START TEST raid_state_function_test_sb 00:08:32.342 ************************************ 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64315 00:08:32.342 15:16:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64315' 00:08:32.342 Process raid pid: 64315 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64315 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64315 ']' 00:08:32.342 15:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.343 15:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:32.343 15:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.343 15:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:32.343 15:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.343 [2024-11-20 15:16:18.606435] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:08:32.343 [2024-11-20 15:16:18.606568] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.343 [2024-11-20 15:16:18.787457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.603 [2024-11-20 15:16:18.904539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.862 [2024-11-20 15:16:19.111983] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.862 [2024-11-20 15:16:19.112043] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:33.120 15:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:33.120 15:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:33.120 15:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:33.120 15:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.120 15:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.120 [2024-11-20 15:16:19.459039] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:33.120 [2024-11-20 15:16:19.459276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:33.120 [2024-11-20 15:16:19.459302] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:33.120 [2024-11-20 15:16:19.459318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:33.120 [2024-11-20 15:16:19.459326] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:33.120 [2024-11-20 15:16:19.459339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:33.120 15:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.120 15:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:33.120 15:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.120 15:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.120 15:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.120 15:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.120 15:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.120 15:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.120 15:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.120 15:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.120 15:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.120 15:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.120 15:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.120 15:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.120 15:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.120 15:16:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.120 15:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.120 "name": "Existed_Raid", 00:08:33.120 "uuid": "8301c79a-1763-4270-aff8-de185ce47594", 00:08:33.120 "strip_size_kb": 64, 00:08:33.120 "state": "configuring", 00:08:33.120 "raid_level": "raid0", 00:08:33.120 "superblock": true, 00:08:33.120 "num_base_bdevs": 3, 00:08:33.120 "num_base_bdevs_discovered": 0, 00:08:33.120 "num_base_bdevs_operational": 3, 00:08:33.120 "base_bdevs_list": [ 00:08:33.120 { 00:08:33.120 "name": "BaseBdev1", 00:08:33.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.120 "is_configured": false, 00:08:33.120 "data_offset": 0, 00:08:33.120 "data_size": 0 00:08:33.120 }, 00:08:33.120 { 00:08:33.120 "name": "BaseBdev2", 00:08:33.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.120 "is_configured": false, 00:08:33.120 "data_offset": 0, 00:08:33.120 "data_size": 0 00:08:33.120 }, 00:08:33.120 { 00:08:33.120 "name": "BaseBdev3", 00:08:33.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.120 "is_configured": false, 00:08:33.120 "data_offset": 0, 00:08:33.120 "data_size": 0 00:08:33.120 } 00:08:33.120 ] 00:08:33.120 }' 00:08:33.120 15:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.120 15:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.687 15:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:33.687 15:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.687 15:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.687 [2024-11-20 15:16:19.894384] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:33.687 [2024-11-20 15:16:19.894428] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:33.687 15:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.687 15:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:33.687 15:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.687 15:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.687 [2024-11-20 15:16:19.906384] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:33.687 [2024-11-20 15:16:19.906440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:33.687 [2024-11-20 15:16:19.906451] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:33.687 [2024-11-20 15:16:19.906464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:33.687 [2024-11-20 15:16:19.906472] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:33.687 [2024-11-20 15:16:19.906486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:33.687 15:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.687 15:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:33.687 15:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.687 15:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.687 [2024-11-20 15:16:19.956014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:33.687 BaseBdev1 
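The log above shows `bdev_raid_create` succeeding even though none of the base bdevs exist yet: the raid bdev is registered in the "configuring" state with three placeholder entries (all-zero UUIDs, `is_configured: false`), and the test's `verify_raid_bdev_state` helper confirms that by inspecting the `bdev_raid_get_bdevs` JSON. Below is a minimal standalone sketch of that check; the real helper pipes `rpc_cmd bdev_raid_get_bdevs all` through `jq`, while this version parses a captured blob with `sed` so it runs without a live SPDK app. The blob contents mirror the dump in the log; everything else is illustrative.

```shell
#!/usr/bin/env bash
# Standalone sketch of the state check done by verify_raid_bdev_state.
# The real test runs: rpc_cmd bdev_raid_get_bdevs all | jq -r '...'
# Here we parse a captured JSON blob with sed instead, so no SPDK app is needed.
raid_bdev_info='{ "name": "Existed_Raid", "state": "configuring", "raid_level": "raid0", "num_base_bdevs": 3, "num_base_bdevs_discovered": 0 }'

# Extract the "state" string field and the numeric discovered count.
state=$(echo "$raid_bdev_info" | sed -n 's/.*"state": "\([^"]*\)".*/\1/p')
discovered=$(echo "$raid_bdev_info" | sed -n 's/.*"num_base_bdevs_discovered": \([0-9]*\).*/\1/p')

if [ "$state" = "configuring" ]; then
    echo "Existed_Raid still configuring: $discovered/3 base bdevs discovered"
fi
```

The same check is repeated after each `bdev_malloc_create` in the log, with only `num_base_bdevs_discovered` changing, until all three base bdevs are claimed.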
00:08:33.687 15:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.687 15:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:33.687 15:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:33.687 15:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:33.687 15:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:33.687 15:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:33.687 15:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:33.687 15:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:33.687 15:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.687 15:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.687 15:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.687 15:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:33.687 15:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.687 15:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.687 [ 00:08:33.687 { 00:08:33.687 "name": "BaseBdev1", 00:08:33.687 "aliases": [ 00:08:33.687 "6a219c0a-f215-4dc8-bbd6-831c502825ab" 00:08:33.687 ], 00:08:33.687 "product_name": "Malloc disk", 00:08:33.687 "block_size": 512, 00:08:33.687 "num_blocks": 65536, 00:08:33.687 "uuid": "6a219c0a-f215-4dc8-bbd6-831c502825ab", 00:08:33.687 "assigned_rate_limits": { 00:08:33.687 
"rw_ios_per_sec": 0, 00:08:33.687 "rw_mbytes_per_sec": 0, 00:08:33.687 "r_mbytes_per_sec": 0, 00:08:33.687 "w_mbytes_per_sec": 0 00:08:33.687 }, 00:08:33.687 "claimed": true, 00:08:33.687 "claim_type": "exclusive_write", 00:08:33.687 "zoned": false, 00:08:33.687 "supported_io_types": { 00:08:33.687 "read": true, 00:08:33.687 "write": true, 00:08:33.687 "unmap": true, 00:08:33.687 "flush": true, 00:08:33.687 "reset": true, 00:08:33.687 "nvme_admin": false, 00:08:33.687 "nvme_io": false, 00:08:33.687 "nvme_io_md": false, 00:08:33.687 "write_zeroes": true, 00:08:33.687 "zcopy": true, 00:08:33.687 "get_zone_info": false, 00:08:33.687 "zone_management": false, 00:08:33.687 "zone_append": false, 00:08:33.687 "compare": false, 00:08:33.687 "compare_and_write": false, 00:08:33.687 "abort": true, 00:08:33.687 "seek_hole": false, 00:08:33.687 "seek_data": false, 00:08:33.687 "copy": true, 00:08:33.687 "nvme_iov_md": false 00:08:33.687 }, 00:08:33.687 "memory_domains": [ 00:08:33.687 { 00:08:33.687 "dma_device_id": "system", 00:08:33.687 "dma_device_type": 1 00:08:33.687 }, 00:08:33.687 { 00:08:33.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.687 "dma_device_type": 2 00:08:33.687 } 00:08:33.687 ], 00:08:33.687 "driver_specific": {} 00:08:33.687 } 00:08:33.687 ] 00:08:33.687 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.687 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:33.687 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:33.687 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.687 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.687 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:33.687 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.688 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.688 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.688 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.688 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.688 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.688 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.688 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.688 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.688 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.688 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.688 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.688 "name": "Existed_Raid", 00:08:33.688 "uuid": "4188e440-bd54-42ab-a7ae-3d45a62d866c", 00:08:33.688 "strip_size_kb": 64, 00:08:33.688 "state": "configuring", 00:08:33.688 "raid_level": "raid0", 00:08:33.688 "superblock": true, 00:08:33.688 "num_base_bdevs": 3, 00:08:33.688 "num_base_bdevs_discovered": 1, 00:08:33.688 "num_base_bdevs_operational": 3, 00:08:33.688 "base_bdevs_list": [ 00:08:33.688 { 00:08:33.688 "name": "BaseBdev1", 00:08:33.688 "uuid": "6a219c0a-f215-4dc8-bbd6-831c502825ab", 00:08:33.688 "is_configured": true, 00:08:33.688 "data_offset": 2048, 00:08:33.688 "data_size": 63488 
00:08:33.688 }, 00:08:33.688 { 00:08:33.688 "name": "BaseBdev2", 00:08:33.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.688 "is_configured": false, 00:08:33.688 "data_offset": 0, 00:08:33.688 "data_size": 0 00:08:33.688 }, 00:08:33.688 { 00:08:33.688 "name": "BaseBdev3", 00:08:33.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.688 "is_configured": false, 00:08:33.688 "data_offset": 0, 00:08:33.688 "data_size": 0 00:08:33.688 } 00:08:33.688 ] 00:08:33.688 }' 00:08:33.688 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.688 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.946 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:33.946 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.946 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.946 [2024-11-20 15:16:20.355555] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:33.946 [2024-11-20 15:16:20.355615] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:33.946 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.946 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:33.946 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.946 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.946 [2024-11-20 15:16:20.367627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:33.946 [2024-11-20 
15:16:20.369892] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:33.946 [2024-11-20 15:16:20.369943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:33.946 [2024-11-20 15:16:20.369955] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:33.946 [2024-11-20 15:16:20.369967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:33.946 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.947 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:33.947 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:33.947 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:33.947 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.947 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.947 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.947 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.947 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.947 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.947 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.947 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.947 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:33.947 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.947 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.947 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.947 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.947 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.947 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.947 "name": "Existed_Raid", 00:08:33.947 "uuid": "f76fe650-58e6-4783-92ab-8ce3eb73ba70", 00:08:33.947 "strip_size_kb": 64, 00:08:33.947 "state": "configuring", 00:08:33.947 "raid_level": "raid0", 00:08:33.947 "superblock": true, 00:08:33.947 "num_base_bdevs": 3, 00:08:33.947 "num_base_bdevs_discovered": 1, 00:08:33.947 "num_base_bdevs_operational": 3, 00:08:33.947 "base_bdevs_list": [ 00:08:33.947 { 00:08:33.947 "name": "BaseBdev1", 00:08:33.947 "uuid": "6a219c0a-f215-4dc8-bbd6-831c502825ab", 00:08:33.947 "is_configured": true, 00:08:33.947 "data_offset": 2048, 00:08:33.947 "data_size": 63488 00:08:33.947 }, 00:08:33.947 { 00:08:33.947 "name": "BaseBdev2", 00:08:33.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.947 "is_configured": false, 00:08:33.947 "data_offset": 0, 00:08:33.947 "data_size": 0 00:08:33.947 }, 00:08:33.947 { 00:08:33.947 "name": "BaseBdev3", 00:08:33.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.947 "is_configured": false, 00:08:33.947 "data_offset": 0, 00:08:33.947 "data_size": 0 00:08:33.947 } 00:08:33.947 ] 00:08:33.947 }' 00:08:33.947 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.947 15:16:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:34.514 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:34.514 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.514 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.514 [2024-11-20 15:16:20.846581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:34.514 BaseBdev2 00:08:34.514 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.514 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:34.514 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:34.514 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:34.514 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:34.514 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:34.514 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:34.514 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:34.514 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.514 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.514 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.514 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:34.514 15:16:20 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.514 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.514 [ 00:08:34.514 { 00:08:34.514 "name": "BaseBdev2", 00:08:34.514 "aliases": [ 00:08:34.514 "936dc22c-16ec-465c-8dea-f815acfa62e9" 00:08:34.514 ], 00:08:34.514 "product_name": "Malloc disk", 00:08:34.514 "block_size": 512, 00:08:34.514 "num_blocks": 65536, 00:08:34.514 "uuid": "936dc22c-16ec-465c-8dea-f815acfa62e9", 00:08:34.514 "assigned_rate_limits": { 00:08:34.514 "rw_ios_per_sec": 0, 00:08:34.514 "rw_mbytes_per_sec": 0, 00:08:34.514 "r_mbytes_per_sec": 0, 00:08:34.514 "w_mbytes_per_sec": 0 00:08:34.514 }, 00:08:34.514 "claimed": true, 00:08:34.514 "claim_type": "exclusive_write", 00:08:34.514 "zoned": false, 00:08:34.514 "supported_io_types": { 00:08:34.514 "read": true, 00:08:34.514 "write": true, 00:08:34.514 "unmap": true, 00:08:34.514 "flush": true, 00:08:34.514 "reset": true, 00:08:34.514 "nvme_admin": false, 00:08:34.514 "nvme_io": false, 00:08:34.514 "nvme_io_md": false, 00:08:34.514 "write_zeroes": true, 00:08:34.514 "zcopy": true, 00:08:34.514 "get_zone_info": false, 00:08:34.514 "zone_management": false, 00:08:34.514 "zone_append": false, 00:08:34.514 "compare": false, 00:08:34.514 "compare_and_write": false, 00:08:34.514 "abort": true, 00:08:34.514 "seek_hole": false, 00:08:34.514 "seek_data": false, 00:08:34.514 "copy": true, 00:08:34.514 "nvme_iov_md": false 00:08:34.514 }, 00:08:34.514 "memory_domains": [ 00:08:34.514 { 00:08:34.514 "dma_device_id": "system", 00:08:34.514 "dma_device_type": 1 00:08:34.514 }, 00:08:34.514 { 00:08:34.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.514 "dma_device_type": 2 00:08:34.514 } 00:08:34.514 ], 00:08:34.514 "driver_specific": {} 00:08:34.514 } 00:08:34.514 ] 00:08:34.514 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.514 15:16:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:08:34.514 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:34.514 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:34.514 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:34.514 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.514 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.514 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.514 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.514 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.514 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.514 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.514 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.514 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.515 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.515 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.515 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.515 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.515 15:16:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.515 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.515 "name": "Existed_Raid", 00:08:34.515 "uuid": "f76fe650-58e6-4783-92ab-8ce3eb73ba70", 00:08:34.515 "strip_size_kb": 64, 00:08:34.515 "state": "configuring", 00:08:34.515 "raid_level": "raid0", 00:08:34.515 "superblock": true, 00:08:34.515 "num_base_bdevs": 3, 00:08:34.515 "num_base_bdevs_discovered": 2, 00:08:34.515 "num_base_bdevs_operational": 3, 00:08:34.515 "base_bdevs_list": [ 00:08:34.515 { 00:08:34.515 "name": "BaseBdev1", 00:08:34.515 "uuid": "6a219c0a-f215-4dc8-bbd6-831c502825ab", 00:08:34.515 "is_configured": true, 00:08:34.515 "data_offset": 2048, 00:08:34.515 "data_size": 63488 00:08:34.515 }, 00:08:34.515 { 00:08:34.515 "name": "BaseBdev2", 00:08:34.515 "uuid": "936dc22c-16ec-465c-8dea-f815acfa62e9", 00:08:34.515 "is_configured": true, 00:08:34.515 "data_offset": 2048, 00:08:34.515 "data_size": 63488 00:08:34.515 }, 00:08:34.515 { 00:08:34.515 "name": "BaseBdev3", 00:08:34.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.515 "is_configured": false, 00:08:34.515 "data_offset": 0, 00:08:34.515 "data_size": 0 00:08:34.515 } 00:08:34.515 ] 00:08:34.515 }' 00:08:34.515 15:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.515 15:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.080 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:35.080 15:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.080 15:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.080 [2024-11-20 15:16:21.368065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:35.080 [2024-11-20 15:16:21.368350] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:35.080 [2024-11-20 15:16:21.368390] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:35.080 [2024-11-20 15:16:21.368696] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:35.080 [2024-11-20 15:16:21.368868] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:35.080 [2024-11-20 15:16:21.368879] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:35.080 BaseBdev3 00:08:35.080 [2024-11-20 15:16:21.369034] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.080 15:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.080 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:35.080 15:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:35.080 15:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:35.080 15:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:35.080 15:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:35.080 15:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:35.080 15:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:35.080 15:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.080 15:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.081 15:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:35.081 15:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:35.081 15:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.081 15:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.081 [ 00:08:35.081 { 00:08:35.081 "name": "BaseBdev3", 00:08:35.081 "aliases": [ 00:08:35.081 "72e399d1-a4ec-4eb9-8861-2c9460cebf6d" 00:08:35.081 ], 00:08:35.081 "product_name": "Malloc disk", 00:08:35.081 "block_size": 512, 00:08:35.081 "num_blocks": 65536, 00:08:35.081 "uuid": "72e399d1-a4ec-4eb9-8861-2c9460cebf6d", 00:08:35.081 "assigned_rate_limits": { 00:08:35.081 "rw_ios_per_sec": 0, 00:08:35.081 "rw_mbytes_per_sec": 0, 00:08:35.081 "r_mbytes_per_sec": 0, 00:08:35.081 "w_mbytes_per_sec": 0 00:08:35.081 }, 00:08:35.081 "claimed": true, 00:08:35.081 "claim_type": "exclusive_write", 00:08:35.081 "zoned": false, 00:08:35.081 "supported_io_types": { 00:08:35.081 "read": true, 00:08:35.081 "write": true, 00:08:35.081 "unmap": true, 00:08:35.081 "flush": true, 00:08:35.081 "reset": true, 00:08:35.081 "nvme_admin": false, 00:08:35.081 "nvme_io": false, 00:08:35.081 "nvme_io_md": false, 00:08:35.081 "write_zeroes": true, 00:08:35.081 "zcopy": true, 00:08:35.081 "get_zone_info": false, 00:08:35.081 "zone_management": false, 00:08:35.081 "zone_append": false, 00:08:35.081 "compare": false, 00:08:35.081 "compare_and_write": false, 00:08:35.081 "abort": true, 00:08:35.081 "seek_hole": false, 00:08:35.081 "seek_data": false, 00:08:35.081 "copy": true, 00:08:35.081 "nvme_iov_md": false 00:08:35.081 }, 00:08:35.081 "memory_domains": [ 00:08:35.081 { 00:08:35.081 "dma_device_id": "system", 00:08:35.081 "dma_device_type": 1 00:08:35.081 }, 00:08:35.081 { 00:08:35.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.081 "dma_device_type": 2 00:08:35.081 } 00:08:35.081 ], 00:08:35.081 "driver_specific": 
{} 00:08:35.081 } 00:08:35.081 ] 00:08:35.081 15:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.081 15:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:35.081 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:35.081 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:35.081 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:35.081 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.081 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.081 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.081 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.081 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.081 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.081 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.081 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.081 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.081 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.081 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.081 15:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:35.081 15:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.081 15:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.081 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.081 "name": "Existed_Raid", 00:08:35.081 "uuid": "f76fe650-58e6-4783-92ab-8ce3eb73ba70", 00:08:35.081 "strip_size_kb": 64, 00:08:35.081 "state": "online", 00:08:35.081 "raid_level": "raid0", 00:08:35.081 "superblock": true, 00:08:35.081 "num_base_bdevs": 3, 00:08:35.081 "num_base_bdevs_discovered": 3, 00:08:35.081 "num_base_bdevs_operational": 3, 00:08:35.081 "base_bdevs_list": [ 00:08:35.081 { 00:08:35.081 "name": "BaseBdev1", 00:08:35.081 "uuid": "6a219c0a-f215-4dc8-bbd6-831c502825ab", 00:08:35.081 "is_configured": true, 00:08:35.081 "data_offset": 2048, 00:08:35.081 "data_size": 63488 00:08:35.081 }, 00:08:35.081 { 00:08:35.081 "name": "BaseBdev2", 00:08:35.081 "uuid": "936dc22c-16ec-465c-8dea-f815acfa62e9", 00:08:35.081 "is_configured": true, 00:08:35.081 "data_offset": 2048, 00:08:35.081 "data_size": 63488 00:08:35.081 }, 00:08:35.081 { 00:08:35.081 "name": "BaseBdev3", 00:08:35.081 "uuid": "72e399d1-a4ec-4eb9-8861-2c9460cebf6d", 00:08:35.081 "is_configured": true, 00:08:35.081 "data_offset": 2048, 00:08:35.081 "data_size": 63488 00:08:35.081 } 00:08:35.081 ] 00:08:35.081 }' 00:08:35.081 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.081 15:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.340 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:35.340 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:35.340 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:35.340 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:35.340 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:35.340 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:35.340 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:35.340 15:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.340 15:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.340 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:35.340 [2024-11-20 15:16:21.815990] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:35.599 15:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.599 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:35.599 "name": "Existed_Raid", 00:08:35.599 "aliases": [ 00:08:35.599 "f76fe650-58e6-4783-92ab-8ce3eb73ba70" 00:08:35.599 ], 00:08:35.599 "product_name": "Raid Volume", 00:08:35.599 "block_size": 512, 00:08:35.599 "num_blocks": 190464, 00:08:35.599 "uuid": "f76fe650-58e6-4783-92ab-8ce3eb73ba70", 00:08:35.599 "assigned_rate_limits": { 00:08:35.599 "rw_ios_per_sec": 0, 00:08:35.599 "rw_mbytes_per_sec": 0, 00:08:35.599 "r_mbytes_per_sec": 0, 00:08:35.599 "w_mbytes_per_sec": 0 00:08:35.599 }, 00:08:35.599 "claimed": false, 00:08:35.599 "zoned": false, 00:08:35.599 "supported_io_types": { 00:08:35.599 "read": true, 00:08:35.599 "write": true, 00:08:35.599 "unmap": true, 00:08:35.599 "flush": true, 00:08:35.599 "reset": true, 00:08:35.599 "nvme_admin": false, 00:08:35.599 "nvme_io": false, 00:08:35.599 "nvme_io_md": false, 00:08:35.599 
"write_zeroes": true, 00:08:35.599 "zcopy": false, 00:08:35.599 "get_zone_info": false, 00:08:35.599 "zone_management": false, 00:08:35.599 "zone_append": false, 00:08:35.599 "compare": false, 00:08:35.599 "compare_and_write": false, 00:08:35.599 "abort": false, 00:08:35.599 "seek_hole": false, 00:08:35.599 "seek_data": false, 00:08:35.599 "copy": false, 00:08:35.599 "nvme_iov_md": false 00:08:35.599 }, 00:08:35.599 "memory_domains": [ 00:08:35.599 { 00:08:35.599 "dma_device_id": "system", 00:08:35.599 "dma_device_type": 1 00:08:35.599 }, 00:08:35.599 { 00:08:35.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.599 "dma_device_type": 2 00:08:35.599 }, 00:08:35.599 { 00:08:35.599 "dma_device_id": "system", 00:08:35.599 "dma_device_type": 1 00:08:35.599 }, 00:08:35.599 { 00:08:35.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.599 "dma_device_type": 2 00:08:35.599 }, 00:08:35.599 { 00:08:35.599 "dma_device_id": "system", 00:08:35.599 "dma_device_type": 1 00:08:35.599 }, 00:08:35.599 { 00:08:35.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.599 "dma_device_type": 2 00:08:35.599 } 00:08:35.599 ], 00:08:35.599 "driver_specific": { 00:08:35.599 "raid": { 00:08:35.599 "uuid": "f76fe650-58e6-4783-92ab-8ce3eb73ba70", 00:08:35.599 "strip_size_kb": 64, 00:08:35.599 "state": "online", 00:08:35.599 "raid_level": "raid0", 00:08:35.599 "superblock": true, 00:08:35.599 "num_base_bdevs": 3, 00:08:35.599 "num_base_bdevs_discovered": 3, 00:08:35.599 "num_base_bdevs_operational": 3, 00:08:35.599 "base_bdevs_list": [ 00:08:35.599 { 00:08:35.599 "name": "BaseBdev1", 00:08:35.599 "uuid": "6a219c0a-f215-4dc8-bbd6-831c502825ab", 00:08:35.599 "is_configured": true, 00:08:35.599 "data_offset": 2048, 00:08:35.599 "data_size": 63488 00:08:35.599 }, 00:08:35.599 { 00:08:35.599 "name": "BaseBdev2", 00:08:35.599 "uuid": "936dc22c-16ec-465c-8dea-f815acfa62e9", 00:08:35.599 "is_configured": true, 00:08:35.599 "data_offset": 2048, 00:08:35.599 "data_size": 63488 00:08:35.599 }, 
00:08:35.599 { 00:08:35.599 "name": "BaseBdev3", 00:08:35.599 "uuid": "72e399d1-a4ec-4eb9-8861-2c9460cebf6d", 00:08:35.599 "is_configured": true, 00:08:35.599 "data_offset": 2048, 00:08:35.599 "data_size": 63488 00:08:35.599 } 00:08:35.599 ] 00:08:35.599 } 00:08:35.599 } 00:08:35.599 }' 00:08:35.599 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:35.599 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:35.599 BaseBdev2 00:08:35.599 BaseBdev3' 00:08:35.599 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.599 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:35.599 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.599 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:35.599 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.599 15:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.599 15:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.599 15:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.599 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.599 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.599 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.599 
15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.599 15:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:35.599 15:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.599 15:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.599 15:16:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.599 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.599 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.599 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.599 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.599 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:35.599 15:16:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.599 15:16:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.599 15:16:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.599 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.599 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.600 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:35.600 15:16:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.600 15:16:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.858 [2024-11-20 15:16:22.083407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:35.858 [2024-11-20 15:16:22.083441] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:35.858 [2024-11-20 15:16:22.083517] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:35.858 15:16:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.858 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:35.858 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:35.858 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:35.858 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:35.858 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:35.858 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:35.858 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.858 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:35.858 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.858 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.858 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:35.858 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:35.858 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.858 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.858 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.858 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.858 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.859 15:16:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.859 15:16:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.859 15:16:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.859 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.859 "name": "Existed_Raid", 00:08:35.859 "uuid": "f76fe650-58e6-4783-92ab-8ce3eb73ba70", 00:08:35.859 "strip_size_kb": 64, 00:08:35.859 "state": "offline", 00:08:35.859 "raid_level": "raid0", 00:08:35.859 "superblock": true, 00:08:35.859 "num_base_bdevs": 3, 00:08:35.859 "num_base_bdevs_discovered": 2, 00:08:35.859 "num_base_bdevs_operational": 2, 00:08:35.859 "base_bdevs_list": [ 00:08:35.859 { 00:08:35.859 "name": null, 00:08:35.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.859 "is_configured": false, 00:08:35.859 "data_offset": 0, 00:08:35.859 "data_size": 63488 00:08:35.859 }, 00:08:35.859 { 00:08:35.859 "name": "BaseBdev2", 00:08:35.859 "uuid": "936dc22c-16ec-465c-8dea-f815acfa62e9", 00:08:35.859 "is_configured": true, 00:08:35.859 "data_offset": 2048, 00:08:35.859 "data_size": 63488 00:08:35.859 }, 00:08:35.859 { 00:08:35.859 "name": "BaseBdev3", 00:08:35.859 "uuid": "72e399d1-a4ec-4eb9-8861-2c9460cebf6d", 
00:08:35.859 "is_configured": true, 00:08:35.859 "data_offset": 2048, 00:08:35.859 "data_size": 63488 00:08:35.859 } 00:08:35.859 ] 00:08:35.859 }' 00:08:35.859 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.859 15:16:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.513 [2024-11-20 15:16:22.684847] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.513 [2024-11-20 15:16:22.839004] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:36.513 [2024-11-20 15:16:22.839060] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:36.513 15:16:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.772 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:36.772 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:36.772 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:36.772 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:36.772 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:36.772 15:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:36.772 15:16:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.772 15:16:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.772 BaseBdev2 00:08:36.772 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.772 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:36.772 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:36.772 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:36.772 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:36.772 15:16:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:36.772 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:36.772 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:36.772 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.772 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.772 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.772 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:36.772 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.772 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.772 [ 00:08:36.772 { 00:08:36.772 "name": "BaseBdev2", 00:08:36.772 "aliases": [ 00:08:36.772 "e7989db2-c3bb-44e6-87c4-68cda62e8f24" 00:08:36.772 ], 00:08:36.772 "product_name": "Malloc disk", 00:08:36.772 "block_size": 512, 00:08:36.772 "num_blocks": 65536, 00:08:36.772 "uuid": "e7989db2-c3bb-44e6-87c4-68cda62e8f24", 00:08:36.772 "assigned_rate_limits": { 00:08:36.773 "rw_ios_per_sec": 0, 00:08:36.773 "rw_mbytes_per_sec": 0, 00:08:36.773 "r_mbytes_per_sec": 0, 00:08:36.773 "w_mbytes_per_sec": 0 00:08:36.773 }, 00:08:36.773 "claimed": false, 00:08:36.773 "zoned": false, 00:08:36.773 "supported_io_types": { 00:08:36.773 "read": true, 00:08:36.773 "write": true, 00:08:36.773 "unmap": true, 00:08:36.773 "flush": true, 00:08:36.773 "reset": true, 00:08:36.773 "nvme_admin": false, 00:08:36.773 "nvme_io": false, 00:08:36.773 "nvme_io_md": false, 00:08:36.773 "write_zeroes": true, 00:08:36.773 "zcopy": true, 00:08:36.773 "get_zone_info": false, 00:08:36.773 
"zone_management": false, 00:08:36.773 "zone_append": false, 00:08:36.773 "compare": false, 00:08:36.773 "compare_and_write": false, 00:08:36.773 "abort": true, 00:08:36.773 "seek_hole": false, 00:08:36.773 "seek_data": false, 00:08:36.773 "copy": true, 00:08:36.773 "nvme_iov_md": false 00:08:36.773 }, 00:08:36.773 "memory_domains": [ 00:08:36.773 { 00:08:36.773 "dma_device_id": "system", 00:08:36.773 "dma_device_type": 1 00:08:36.773 }, 00:08:36.773 { 00:08:36.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.773 "dma_device_type": 2 00:08:36.773 } 00:08:36.773 ], 00:08:36.773 "driver_specific": {} 00:08:36.773 } 00:08:36.773 ] 00:08:36.773 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.773 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:36.773 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:36.773 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:36.773 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:36.773 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.773 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.773 BaseBdev3 00:08:36.773 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.773 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:36.773 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:36.773 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:36.773 15:16:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:08:36.773 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:36.773 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:36.773 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:36.773 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.773 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.773 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.773 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:36.773 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.773 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.773 [ 00:08:36.773 { 00:08:36.773 "name": "BaseBdev3", 00:08:36.773 "aliases": [ 00:08:36.773 "c8d2b610-d395-4686-8f1d-7479f2931497" 00:08:36.773 ], 00:08:36.773 "product_name": "Malloc disk", 00:08:36.773 "block_size": 512, 00:08:36.773 "num_blocks": 65536, 00:08:36.773 "uuid": "c8d2b610-d395-4686-8f1d-7479f2931497", 00:08:36.773 "assigned_rate_limits": { 00:08:36.773 "rw_ios_per_sec": 0, 00:08:36.773 "rw_mbytes_per_sec": 0, 00:08:36.773 "r_mbytes_per_sec": 0, 00:08:36.773 "w_mbytes_per_sec": 0 00:08:36.773 }, 00:08:36.773 "claimed": false, 00:08:36.773 "zoned": false, 00:08:36.773 "supported_io_types": { 00:08:36.773 "read": true, 00:08:36.773 "write": true, 00:08:36.773 "unmap": true, 00:08:36.773 "flush": true, 00:08:36.773 "reset": true, 00:08:36.773 "nvme_admin": false, 00:08:36.773 "nvme_io": false, 00:08:36.773 "nvme_io_md": false, 00:08:36.773 "write_zeroes": true, 00:08:36.773 
"zcopy": true, 00:08:36.773 "get_zone_info": false, 00:08:36.773 "zone_management": false, 00:08:36.773 "zone_append": false, 00:08:36.773 "compare": false, 00:08:36.773 "compare_and_write": false, 00:08:36.773 "abort": true, 00:08:36.773 "seek_hole": false, 00:08:36.773 "seek_data": false, 00:08:36.773 "copy": true, 00:08:36.773 "nvme_iov_md": false 00:08:36.773 }, 00:08:36.773 "memory_domains": [ 00:08:36.773 { 00:08:36.773 "dma_device_id": "system", 00:08:36.773 "dma_device_type": 1 00:08:36.773 }, 00:08:36.773 { 00:08:36.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.773 "dma_device_type": 2 00:08:36.773 } 00:08:36.773 ], 00:08:36.773 "driver_specific": {} 00:08:36.773 } 00:08:36.773 ] 00:08:36.773 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.773 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:36.773 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:36.773 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:36.773 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:36.773 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.773 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.773 [2024-11-20 15:16:23.163066] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:36.774 [2024-11-20 15:16:23.163118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:36.774 [2024-11-20 15:16:23.163148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:36.774 [2024-11-20 15:16:23.165445] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:36.774 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.774 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:36.774 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.774 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.774 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.774 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.774 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.774 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.774 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.774 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.774 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.774 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.774 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.774 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.774 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.774 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.774 15:16:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.774 "name": "Existed_Raid", 00:08:36.774 "uuid": "7e4d1992-60cc-4253-baf6-75d313379405", 00:08:36.774 "strip_size_kb": 64, 00:08:36.774 "state": "configuring", 00:08:36.774 "raid_level": "raid0", 00:08:36.774 "superblock": true, 00:08:36.774 "num_base_bdevs": 3, 00:08:36.774 "num_base_bdevs_discovered": 2, 00:08:36.774 "num_base_bdevs_operational": 3, 00:08:36.774 "base_bdevs_list": [ 00:08:36.774 { 00:08:36.774 "name": "BaseBdev1", 00:08:36.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.774 "is_configured": false, 00:08:36.774 "data_offset": 0, 00:08:36.774 "data_size": 0 00:08:36.774 }, 00:08:36.774 { 00:08:36.774 "name": "BaseBdev2", 00:08:36.774 "uuid": "e7989db2-c3bb-44e6-87c4-68cda62e8f24", 00:08:36.774 "is_configured": true, 00:08:36.774 "data_offset": 2048, 00:08:36.774 "data_size": 63488 00:08:36.774 }, 00:08:36.774 { 00:08:36.774 "name": "BaseBdev3", 00:08:36.774 "uuid": "c8d2b610-d395-4686-8f1d-7479f2931497", 00:08:36.774 "is_configured": true, 00:08:36.774 "data_offset": 2048, 00:08:36.774 "data_size": 63488 00:08:36.774 } 00:08:36.774 ] 00:08:36.774 }' 00:08:36.774 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.774 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.343 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:37.343 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.343 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.343 [2024-11-20 15:16:23.626416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:37.343 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.343 15:16:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:37.343 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.343 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.343 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.343 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.343 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.343 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.343 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.343 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.343 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.343 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.343 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.343 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.343 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.343 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.343 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.343 "name": "Existed_Raid", 00:08:37.343 "uuid": "7e4d1992-60cc-4253-baf6-75d313379405", 00:08:37.343 "strip_size_kb": 64, 
00:08:37.343 "state": "configuring", 00:08:37.343 "raid_level": "raid0", 00:08:37.343 "superblock": true, 00:08:37.343 "num_base_bdevs": 3, 00:08:37.343 "num_base_bdevs_discovered": 1, 00:08:37.343 "num_base_bdevs_operational": 3, 00:08:37.343 "base_bdevs_list": [ 00:08:37.343 { 00:08:37.343 "name": "BaseBdev1", 00:08:37.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.343 "is_configured": false, 00:08:37.343 "data_offset": 0, 00:08:37.343 "data_size": 0 00:08:37.343 }, 00:08:37.343 { 00:08:37.343 "name": null, 00:08:37.343 "uuid": "e7989db2-c3bb-44e6-87c4-68cda62e8f24", 00:08:37.343 "is_configured": false, 00:08:37.343 "data_offset": 0, 00:08:37.343 "data_size": 63488 00:08:37.343 }, 00:08:37.343 { 00:08:37.343 "name": "BaseBdev3", 00:08:37.343 "uuid": "c8d2b610-d395-4686-8f1d-7479f2931497", 00:08:37.343 "is_configured": true, 00:08:37.343 "data_offset": 2048, 00:08:37.343 "data_size": 63488 00:08:37.343 } 00:08:37.343 ] 00:08:37.343 }' 00:08:37.343 15:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.343 15:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.604 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.604 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:37.604 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.604 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.604 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.604 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:37.604 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:08:37.604 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.604 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.862 [2024-11-20 15:16:24.116474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:37.862 BaseBdev1 00:08:37.862 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.862 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:37.862 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:37.862 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:37.862 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:37.862 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:37.862 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:37.862 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:37.862 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.862 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.862 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.862 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:37.862 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.862 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.862 
[ 00:08:37.862 { 00:08:37.862 "name": "BaseBdev1", 00:08:37.862 "aliases": [ 00:08:37.862 "8d9d3024-846b-45eb-93e1-683832b19645" 00:08:37.862 ], 00:08:37.862 "product_name": "Malloc disk", 00:08:37.862 "block_size": 512, 00:08:37.862 "num_blocks": 65536, 00:08:37.862 "uuid": "8d9d3024-846b-45eb-93e1-683832b19645", 00:08:37.862 "assigned_rate_limits": { 00:08:37.862 "rw_ios_per_sec": 0, 00:08:37.862 "rw_mbytes_per_sec": 0, 00:08:37.862 "r_mbytes_per_sec": 0, 00:08:37.862 "w_mbytes_per_sec": 0 00:08:37.862 }, 00:08:37.862 "claimed": true, 00:08:37.862 "claim_type": "exclusive_write", 00:08:37.862 "zoned": false, 00:08:37.862 "supported_io_types": { 00:08:37.862 "read": true, 00:08:37.862 "write": true, 00:08:37.862 "unmap": true, 00:08:37.862 "flush": true, 00:08:37.862 "reset": true, 00:08:37.862 "nvme_admin": false, 00:08:37.862 "nvme_io": false, 00:08:37.862 "nvme_io_md": false, 00:08:37.862 "write_zeroes": true, 00:08:37.862 "zcopy": true, 00:08:37.862 "get_zone_info": false, 00:08:37.862 "zone_management": false, 00:08:37.862 "zone_append": false, 00:08:37.862 "compare": false, 00:08:37.862 "compare_and_write": false, 00:08:37.862 "abort": true, 00:08:37.862 "seek_hole": false, 00:08:37.862 "seek_data": false, 00:08:37.862 "copy": true, 00:08:37.862 "nvme_iov_md": false 00:08:37.862 }, 00:08:37.862 "memory_domains": [ 00:08:37.862 { 00:08:37.862 "dma_device_id": "system", 00:08:37.862 "dma_device_type": 1 00:08:37.862 }, 00:08:37.862 { 00:08:37.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.862 "dma_device_type": 2 00:08:37.862 } 00:08:37.862 ], 00:08:37.862 "driver_specific": {} 00:08:37.862 } 00:08:37.862 ] 00:08:37.862 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.862 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:37.862 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:08:37.862 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.862 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.862 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.862 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.862 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.862 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.862 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.862 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.862 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.862 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.862 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.862 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.863 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.863 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.863 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.863 "name": "Existed_Raid", 00:08:37.863 "uuid": "7e4d1992-60cc-4253-baf6-75d313379405", 00:08:37.863 "strip_size_kb": 64, 00:08:37.863 "state": "configuring", 00:08:37.863 "raid_level": "raid0", 00:08:37.863 "superblock": true, 
00:08:37.863 "num_base_bdevs": 3, 00:08:37.863 "num_base_bdevs_discovered": 2, 00:08:37.863 "num_base_bdevs_operational": 3, 00:08:37.863 "base_bdevs_list": [ 00:08:37.863 { 00:08:37.863 "name": "BaseBdev1", 00:08:37.863 "uuid": "8d9d3024-846b-45eb-93e1-683832b19645", 00:08:37.863 "is_configured": true, 00:08:37.863 "data_offset": 2048, 00:08:37.863 "data_size": 63488 00:08:37.863 }, 00:08:37.863 { 00:08:37.863 "name": null, 00:08:37.863 "uuid": "e7989db2-c3bb-44e6-87c4-68cda62e8f24", 00:08:37.863 "is_configured": false, 00:08:37.863 "data_offset": 0, 00:08:37.863 "data_size": 63488 00:08:37.863 }, 00:08:37.863 { 00:08:37.863 "name": "BaseBdev3", 00:08:37.863 "uuid": "c8d2b610-d395-4686-8f1d-7479f2931497", 00:08:37.863 "is_configured": true, 00:08:37.863 "data_offset": 2048, 00:08:37.863 "data_size": 63488 00:08:37.863 } 00:08:37.863 ] 00:08:37.863 }' 00:08:37.863 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.863 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.429 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.429 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:38.429 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.429 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.429 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.429 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:38.429 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:38.429 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
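The sequence above is SPDK's `verify_raid_bdev_state` helper from `bdev_raid.sh`: it captures `rpc_cmd bdev_raid_get_bdevs all`, filters out the entry for `Existed_Raid` with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares the state fields against the expected `configuring raid0 64 3` arguments. As a self-contained illustration, the check amounts to the following, with the jq filter rendered in Python and a trimmed sample of the JSON printed in this log inlined in place of the live RPC output:

```python
import json

# Trimmed sample of the `bdev_raid_get_bdevs all` output shown in this log.
# In the real test this JSON comes from rpc_cmd; it is inlined here so the
# sketch runs standalone.
raid_bdevs = json.loads("""
[
  {
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "raid0",
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 3,
    "base_bdevs_list": [
      {"name": "BaseBdev1", "is_configured": true},
      {"name": null, "is_configured": false},
      {"name": "BaseBdev3", "is_configured": true}
    ]
  }
]
""")

# Equivalent of: jq -r '.[] | select(.name == "Existed_Raid")'
info = next(b for b in raid_bdevs if b["name"] == "Existed_Raid")

# verify_raid_bdev_state asserts the expected state, level, strip size and
# operational base bdev count (expected values taken from the log above).
assert info["state"] == "configuring"
assert info["raid_level"] == "raid0"
assert info["strip_size_kb"] == 64
assert info["num_base_bdevs_operational"] == 3

# The discovered count tracks the configured entries in base_bdevs_list.
discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
assert discovered == info["num_base_bdevs_discovered"]
print("state ok")
```

The same pattern repeats after every `bdev_raid_remove_base_bdev` / `bdev_raid_add_base_bdev` call in this log, with only the expected `num_base_bdevs_discovered` changing.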
# xtrace_disable 00:08:38.429 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.429 [2024-11-20 15:16:24.655798] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:38.429 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.429 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:38.429 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.429 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.429 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.429 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.429 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.429 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.429 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.429 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.429 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.429 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.429 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.429 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.429 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:38.429 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.429 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.429 "name": "Existed_Raid", 00:08:38.429 "uuid": "7e4d1992-60cc-4253-baf6-75d313379405", 00:08:38.429 "strip_size_kb": 64, 00:08:38.429 "state": "configuring", 00:08:38.429 "raid_level": "raid0", 00:08:38.429 "superblock": true, 00:08:38.429 "num_base_bdevs": 3, 00:08:38.429 "num_base_bdevs_discovered": 1, 00:08:38.429 "num_base_bdevs_operational": 3, 00:08:38.429 "base_bdevs_list": [ 00:08:38.429 { 00:08:38.429 "name": "BaseBdev1", 00:08:38.429 "uuid": "8d9d3024-846b-45eb-93e1-683832b19645", 00:08:38.429 "is_configured": true, 00:08:38.429 "data_offset": 2048, 00:08:38.429 "data_size": 63488 00:08:38.429 }, 00:08:38.429 { 00:08:38.429 "name": null, 00:08:38.429 "uuid": "e7989db2-c3bb-44e6-87c4-68cda62e8f24", 00:08:38.429 "is_configured": false, 00:08:38.429 "data_offset": 0, 00:08:38.429 "data_size": 63488 00:08:38.429 }, 00:08:38.429 { 00:08:38.429 "name": null, 00:08:38.429 "uuid": "c8d2b610-d395-4686-8f1d-7479f2931497", 00:08:38.429 "is_configured": false, 00:08:38.429 "data_offset": 0, 00:08:38.429 "data_size": 63488 00:08:38.429 } 00:08:38.429 ] 00:08:38.429 }' 00:08:38.429 15:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.429 15:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.689 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.689 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:38.689 15:16:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.689 15:16:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:38.689 15:16:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.689 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:38.689 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:38.689 15:16:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.689 15:16:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.689 [2024-11-20 15:16:25.167347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:38.948 15:16:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.948 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:38.948 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.948 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.948 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.948 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.948 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.948 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.948 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.948 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.948 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:38.948 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.948 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.948 15:16:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.948 15:16:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.948 15:16:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.948 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.948 "name": "Existed_Raid", 00:08:38.948 "uuid": "7e4d1992-60cc-4253-baf6-75d313379405", 00:08:38.948 "strip_size_kb": 64, 00:08:38.948 "state": "configuring", 00:08:38.948 "raid_level": "raid0", 00:08:38.948 "superblock": true, 00:08:38.948 "num_base_bdevs": 3, 00:08:38.948 "num_base_bdevs_discovered": 2, 00:08:38.948 "num_base_bdevs_operational": 3, 00:08:38.948 "base_bdevs_list": [ 00:08:38.948 { 00:08:38.948 "name": "BaseBdev1", 00:08:38.948 "uuid": "8d9d3024-846b-45eb-93e1-683832b19645", 00:08:38.948 "is_configured": true, 00:08:38.948 "data_offset": 2048, 00:08:38.948 "data_size": 63488 00:08:38.948 }, 00:08:38.948 { 00:08:38.948 "name": null, 00:08:38.948 "uuid": "e7989db2-c3bb-44e6-87c4-68cda62e8f24", 00:08:38.948 "is_configured": false, 00:08:38.948 "data_offset": 0, 00:08:38.948 "data_size": 63488 00:08:38.948 }, 00:08:38.948 { 00:08:38.948 "name": "BaseBdev3", 00:08:38.948 "uuid": "c8d2b610-d395-4686-8f1d-7479f2931497", 00:08:38.948 "is_configured": true, 00:08:38.948 "data_offset": 2048, 00:08:38.948 "data_size": 63488 00:08:38.949 } 00:08:38.949 ] 00:08:38.949 }' 00:08:38.949 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.949 15:16:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:39.207 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:39.207 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.207 15:16:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.207 15:16:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.207 15:16:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.207 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:39.207 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:39.207 15:16:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.207 15:16:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.207 [2024-11-20 15:16:25.659373] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:39.467 15:16:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.467 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:39.467 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.467 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.467 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:39.467 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.467 15:16:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.467 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.467 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.467 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.467 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.467 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.467 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.467 15:16:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.467 15:16:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.467 15:16:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.467 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.467 "name": "Existed_Raid", 00:08:39.467 "uuid": "7e4d1992-60cc-4253-baf6-75d313379405", 00:08:39.467 "strip_size_kb": 64, 00:08:39.467 "state": "configuring", 00:08:39.467 "raid_level": "raid0", 00:08:39.467 "superblock": true, 00:08:39.467 "num_base_bdevs": 3, 00:08:39.467 "num_base_bdevs_discovered": 1, 00:08:39.467 "num_base_bdevs_operational": 3, 00:08:39.467 "base_bdevs_list": [ 00:08:39.467 { 00:08:39.467 "name": null, 00:08:39.467 "uuid": "8d9d3024-846b-45eb-93e1-683832b19645", 00:08:39.467 "is_configured": false, 00:08:39.467 "data_offset": 0, 00:08:39.467 "data_size": 63488 00:08:39.467 }, 00:08:39.467 { 00:08:39.467 "name": null, 00:08:39.467 "uuid": "e7989db2-c3bb-44e6-87c4-68cda62e8f24", 00:08:39.467 "is_configured": false, 00:08:39.467 "data_offset": 0, 00:08:39.467 
"data_size": 63488 00:08:39.467 }, 00:08:39.467 { 00:08:39.467 "name": "BaseBdev3", 00:08:39.467 "uuid": "c8d2b610-d395-4686-8f1d-7479f2931497", 00:08:39.467 "is_configured": true, 00:08:39.467 "data_offset": 2048, 00:08:39.467 "data_size": 63488 00:08:39.467 } 00:08:39.467 ] 00:08:39.467 }' 00:08:39.467 15:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.467 15:16:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.728 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.728 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:39.728 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.728 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.032 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.032 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:40.032 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:40.032 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.032 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.032 [2024-11-20 15:16:26.227359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:40.032 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.032 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:40.032 15:16:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.032 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.032 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.032 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.032 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.032 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.032 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.032 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.032 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.032 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.032 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.032 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.032 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.032 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.032 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.032 "name": "Existed_Raid", 00:08:40.032 "uuid": "7e4d1992-60cc-4253-baf6-75d313379405", 00:08:40.032 "strip_size_kb": 64, 00:08:40.032 "state": "configuring", 00:08:40.032 "raid_level": "raid0", 00:08:40.032 "superblock": true, 00:08:40.032 "num_base_bdevs": 3, 00:08:40.032 
"num_base_bdevs_discovered": 2, 00:08:40.032 "num_base_bdevs_operational": 3, 00:08:40.032 "base_bdevs_list": [ 00:08:40.032 { 00:08:40.032 "name": null, 00:08:40.032 "uuid": "8d9d3024-846b-45eb-93e1-683832b19645", 00:08:40.032 "is_configured": false, 00:08:40.032 "data_offset": 0, 00:08:40.032 "data_size": 63488 00:08:40.032 }, 00:08:40.032 { 00:08:40.032 "name": "BaseBdev2", 00:08:40.032 "uuid": "e7989db2-c3bb-44e6-87c4-68cda62e8f24", 00:08:40.032 "is_configured": true, 00:08:40.032 "data_offset": 2048, 00:08:40.032 "data_size": 63488 00:08:40.032 }, 00:08:40.032 { 00:08:40.032 "name": "BaseBdev3", 00:08:40.032 "uuid": "c8d2b610-d395-4686-8f1d-7479f2931497", 00:08:40.032 "is_configured": true, 00:08:40.032 "data_offset": 2048, 00:08:40.032 "data_size": 63488 00:08:40.032 } 00:08:40.032 ] 00:08:40.032 }' 00:08:40.032 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.032 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.322 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.322 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.322 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:40.322 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.322 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.322 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:40.322 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.322 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:40.322 15:16:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.322 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.322 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.322 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8d9d3024-846b-45eb-93e1-683832b19645 00:08:40.322 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.322 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.580 [2024-11-20 15:16:26.818431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:40.580 [2024-11-20 15:16:26.818686] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:40.580 [2024-11-20 15:16:26.818713] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:40.580 NewBaseBdev 00:08:40.580 [2024-11-20 15:16:26.818965] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:40.580 [2024-11-20 15:16:26.819108] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:40.580 [2024-11-20 15:16:26.819118] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:40.580 [2024-11-20 15:16:26.819261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:40.580 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.580 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:40.580 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:40.580 
15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:40.580 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:40.580 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:40.580 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:40.580 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:40.580 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.580 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.580 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.580 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:40.580 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.580 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.580 [ 00:08:40.580 { 00:08:40.580 "name": "NewBaseBdev", 00:08:40.580 "aliases": [ 00:08:40.580 "8d9d3024-846b-45eb-93e1-683832b19645" 00:08:40.580 ], 00:08:40.580 "product_name": "Malloc disk", 00:08:40.580 "block_size": 512, 00:08:40.580 "num_blocks": 65536, 00:08:40.580 "uuid": "8d9d3024-846b-45eb-93e1-683832b19645", 00:08:40.580 "assigned_rate_limits": { 00:08:40.580 "rw_ios_per_sec": 0, 00:08:40.580 "rw_mbytes_per_sec": 0, 00:08:40.580 "r_mbytes_per_sec": 0, 00:08:40.580 "w_mbytes_per_sec": 0 00:08:40.580 }, 00:08:40.580 "claimed": true, 00:08:40.580 "claim_type": "exclusive_write", 00:08:40.580 "zoned": false, 00:08:40.580 "supported_io_types": { 00:08:40.580 "read": true, 00:08:40.580 "write": true, 00:08:40.580 
"unmap": true, 00:08:40.580 "flush": true, 00:08:40.580 "reset": true, 00:08:40.580 "nvme_admin": false, 00:08:40.580 "nvme_io": false, 00:08:40.580 "nvme_io_md": false, 00:08:40.580 "write_zeroes": true, 00:08:40.580 "zcopy": true, 00:08:40.580 "get_zone_info": false, 00:08:40.580 "zone_management": false, 00:08:40.580 "zone_append": false, 00:08:40.580 "compare": false, 00:08:40.581 "compare_and_write": false, 00:08:40.581 "abort": true, 00:08:40.581 "seek_hole": false, 00:08:40.581 "seek_data": false, 00:08:40.581 "copy": true, 00:08:40.581 "nvme_iov_md": false 00:08:40.581 }, 00:08:40.581 "memory_domains": [ 00:08:40.581 { 00:08:40.581 "dma_device_id": "system", 00:08:40.581 "dma_device_type": 1 00:08:40.581 }, 00:08:40.581 { 00:08:40.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.581 "dma_device_type": 2 00:08:40.581 } 00:08:40.581 ], 00:08:40.581 "driver_specific": {} 00:08:40.581 } 00:08:40.581 ] 00:08:40.581 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.581 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:40.581 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:40.581 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.581 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:40.581 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.581 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.581 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.581 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:08:40.581 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.581 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.581 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.581 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.581 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.581 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.581 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.581 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.581 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.581 "name": "Existed_Raid", 00:08:40.581 "uuid": "7e4d1992-60cc-4253-baf6-75d313379405", 00:08:40.581 "strip_size_kb": 64, 00:08:40.581 "state": "online", 00:08:40.581 "raid_level": "raid0", 00:08:40.581 "superblock": true, 00:08:40.581 "num_base_bdevs": 3, 00:08:40.581 "num_base_bdevs_discovered": 3, 00:08:40.581 "num_base_bdevs_operational": 3, 00:08:40.581 "base_bdevs_list": [ 00:08:40.581 { 00:08:40.581 "name": "NewBaseBdev", 00:08:40.581 "uuid": "8d9d3024-846b-45eb-93e1-683832b19645", 00:08:40.581 "is_configured": true, 00:08:40.581 "data_offset": 2048, 00:08:40.581 "data_size": 63488 00:08:40.581 }, 00:08:40.581 { 00:08:40.581 "name": "BaseBdev2", 00:08:40.581 "uuid": "e7989db2-c3bb-44e6-87c4-68cda62e8f24", 00:08:40.581 "is_configured": true, 00:08:40.581 "data_offset": 2048, 00:08:40.581 "data_size": 63488 00:08:40.581 }, 00:08:40.581 { 00:08:40.581 "name": "BaseBdev3", 00:08:40.581 "uuid": "c8d2b610-d395-4686-8f1d-7479f2931497", 00:08:40.581 
"is_configured": true, 00:08:40.581 "data_offset": 2048, 00:08:40.581 "data_size": 63488 00:08:40.581 } 00:08:40.581 ] 00:08:40.581 }' 00:08:40.581 15:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.581 15:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.840 15:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:40.840 15:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:40.840 15:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:40.840 15:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:40.840 15:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:40.840 15:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:40.840 15:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:40.840 15:16:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.840 15:16:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.840 15:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:40.840 [2024-11-20 15:16:27.278188] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:40.840 15:16:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.840 15:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:40.840 "name": "Existed_Raid", 00:08:40.840 "aliases": [ 00:08:40.840 "7e4d1992-60cc-4253-baf6-75d313379405" 00:08:40.840 ], 00:08:40.840 "product_name": "Raid 
Volume", 00:08:40.840 "block_size": 512, 00:08:40.840 "num_blocks": 190464, 00:08:40.840 "uuid": "7e4d1992-60cc-4253-baf6-75d313379405", 00:08:40.840 "assigned_rate_limits": { 00:08:40.840 "rw_ios_per_sec": 0, 00:08:40.840 "rw_mbytes_per_sec": 0, 00:08:40.840 "r_mbytes_per_sec": 0, 00:08:40.840 "w_mbytes_per_sec": 0 00:08:40.840 }, 00:08:40.840 "claimed": false, 00:08:40.840 "zoned": false, 00:08:40.840 "supported_io_types": { 00:08:40.840 "read": true, 00:08:40.840 "write": true, 00:08:40.840 "unmap": true, 00:08:40.840 "flush": true, 00:08:40.840 "reset": true, 00:08:40.840 "nvme_admin": false, 00:08:40.840 "nvme_io": false, 00:08:40.840 "nvme_io_md": false, 00:08:40.840 "write_zeroes": true, 00:08:40.840 "zcopy": false, 00:08:40.840 "get_zone_info": false, 00:08:40.840 "zone_management": false, 00:08:40.840 "zone_append": false, 00:08:40.840 "compare": false, 00:08:40.840 "compare_and_write": false, 00:08:40.840 "abort": false, 00:08:40.840 "seek_hole": false, 00:08:40.840 "seek_data": false, 00:08:40.840 "copy": false, 00:08:40.840 "nvme_iov_md": false 00:08:40.840 }, 00:08:40.840 "memory_domains": [ 00:08:40.840 { 00:08:40.840 "dma_device_id": "system", 00:08:40.840 "dma_device_type": 1 00:08:40.840 }, 00:08:40.840 { 00:08:40.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.840 "dma_device_type": 2 00:08:40.840 }, 00:08:40.840 { 00:08:40.840 "dma_device_id": "system", 00:08:40.840 "dma_device_type": 1 00:08:40.840 }, 00:08:40.840 { 00:08:40.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.840 "dma_device_type": 2 00:08:40.840 }, 00:08:40.840 { 00:08:40.840 "dma_device_id": "system", 00:08:40.840 "dma_device_type": 1 00:08:40.840 }, 00:08:40.840 { 00:08:40.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.840 "dma_device_type": 2 00:08:40.840 } 00:08:40.840 ], 00:08:40.840 "driver_specific": { 00:08:40.840 "raid": { 00:08:40.840 "uuid": "7e4d1992-60cc-4253-baf6-75d313379405", 00:08:40.840 "strip_size_kb": 64, 00:08:40.840 "state": "online", 
00:08:40.840 "raid_level": "raid0", 00:08:40.840 "superblock": true, 00:08:40.840 "num_base_bdevs": 3, 00:08:40.840 "num_base_bdevs_discovered": 3, 00:08:40.840 "num_base_bdevs_operational": 3, 00:08:40.840 "base_bdevs_list": [ 00:08:40.840 { 00:08:40.840 "name": "NewBaseBdev", 00:08:40.840 "uuid": "8d9d3024-846b-45eb-93e1-683832b19645", 00:08:40.840 "is_configured": true, 00:08:40.840 "data_offset": 2048, 00:08:40.840 "data_size": 63488 00:08:40.840 }, 00:08:40.840 { 00:08:40.840 "name": "BaseBdev2", 00:08:40.840 "uuid": "e7989db2-c3bb-44e6-87c4-68cda62e8f24", 00:08:40.840 "is_configured": true, 00:08:40.840 "data_offset": 2048, 00:08:40.840 "data_size": 63488 00:08:40.840 }, 00:08:40.840 { 00:08:40.840 "name": "BaseBdev3", 00:08:40.840 "uuid": "c8d2b610-d395-4686-8f1d-7479f2931497", 00:08:40.840 "is_configured": true, 00:08:40.840 "data_offset": 2048, 00:08:40.840 "data_size": 63488 00:08:40.840 } 00:08:40.840 ] 00:08:40.840 } 00:08:40.840 } 00:08:40.840 }' 00:08:40.840 15:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:41.099 BaseBdev2 00:08:41.099 BaseBdev3' 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
NewBaseBdev 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.099 [2024-11-20 15:16:27.525559] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:41.099 [2024-11-20 15:16:27.525714] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:41.099 [2024-11-20 15:16:27.525815] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.099 [2024-11-20 15:16:27.525872] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:41.099 [2024-11-20 15:16:27.525887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64315 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64315 ']' 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64315 00:08:41.099 15:16:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64315 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64315' 00:08:41.099 killing process with pid 64315 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64315 00:08:41.099 [2024-11-20 15:16:27.573212] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:41.099 15:16:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64315 00:08:41.665 [2024-11-20 15:16:27.892530] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:42.600 15:16:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:42.600 00:08:42.600 real 0m10.574s 00:08:42.600 user 0m16.764s 00:08:42.600 sys 0m2.009s 00:08:42.600 15:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.600 15:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.600 ************************************ 00:08:42.600 END TEST raid_state_function_test_sb 00:08:42.600 ************************************ 00:08:42.858 15:16:29 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:42.858 15:16:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:42.858 15:16:29 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.858 15:16:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:42.858 ************************************ 00:08:42.858 START TEST raid_superblock_test 00:08:42.858 ************************************ 00:08:42.858 15:16:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:08:42.858 15:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:42.858 15:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:42.858 15:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:42.858 15:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:42.858 15:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:42.858 15:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:42.858 15:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:42.858 15:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:42.858 15:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:42.858 15:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:42.858 15:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:42.858 15:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:42.858 15:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:42.858 15:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:42.858 15:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:42.858 15:16:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:42.858 15:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64935 00:08:42.858 15:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64935 00:08:42.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.858 15:16:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 64935 ']' 00:08:42.858 15:16:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.858 15:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:42.858 15:16:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.858 15:16:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.858 15:16:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.858 15:16:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.858 [2024-11-20 15:16:29.235397] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:08:42.858 [2024-11-20 15:16:29.235539] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64935 ] 00:08:43.116 [2024-11-20 15:16:29.420987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.116 [2024-11-20 15:16:29.551627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.375 [2024-11-20 15:16:29.773732] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.375 [2024-11-20 15:16:29.773783] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.634 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.634 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:43.634 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:43.634 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:43.634 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:43.634 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:43.634 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:43.634 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:43.634 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:43.634 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:43.634 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:43.634 
15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.634 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.893 malloc1 00:08:43.893 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.893 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:43.893 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.893 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.893 [2024-11-20 15:16:30.163622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:43.893 [2024-11-20 15:16:30.163712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.893 [2024-11-20 15:16:30.163740] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:43.893 [2024-11-20 15:16:30.163753] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.893 [2024-11-20 15:16:30.166450] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.893 [2024-11-20 15:16:30.166497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:43.893 pt1 00:08:43.893 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.893 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:43.893 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:43.893 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:43.893 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.894 malloc2 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.894 [2024-11-20 15:16:30.218736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:43.894 [2024-11-20 15:16:30.218820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.894 [2024-11-20 15:16:30.218857] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:43.894 [2024-11-20 15:16:30.218872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.894 [2024-11-20 15:16:30.221542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.894 [2024-11-20 15:16:30.221596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:43.894 
pt2 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.894 malloc3 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.894 [2024-11-20 15:16:30.284760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:43.894 [2024-11-20 15:16:30.284838] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.894 [2024-11-20 15:16:30.284866] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:43.894 [2024-11-20 15:16:30.284878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.894 [2024-11-20 15:16:30.287505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.894 [2024-11-20 15:16:30.287559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:43.894 pt3 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.894 [2024-11-20 15:16:30.296832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:43.894 [2024-11-20 15:16:30.299096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:43.894 [2024-11-20 15:16:30.299423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:43.894 [2024-11-20 15:16:30.299628] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:43.894 [2024-11-20 15:16:30.299648] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:43.894 [2024-11-20 15:16:30.299996] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:08:43.894 [2024-11-20 15:16:30.300193] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:43.894 [2024-11-20 15:16:30.300205] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:43.894 [2024-11-20 15:16:30.300413] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.894 "name": "raid_bdev1", 00:08:43.894 "uuid": "ed36e14d-53d5-4a8e-86ad-5b3e8296edb2", 00:08:43.894 "strip_size_kb": 64, 00:08:43.894 "state": "online", 00:08:43.894 "raid_level": "raid0", 00:08:43.894 "superblock": true, 00:08:43.894 "num_base_bdevs": 3, 00:08:43.894 "num_base_bdevs_discovered": 3, 00:08:43.894 "num_base_bdevs_operational": 3, 00:08:43.894 "base_bdevs_list": [ 00:08:43.894 { 00:08:43.894 "name": "pt1", 00:08:43.894 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:43.894 "is_configured": true, 00:08:43.894 "data_offset": 2048, 00:08:43.894 "data_size": 63488 00:08:43.894 }, 00:08:43.894 { 00:08:43.894 "name": "pt2", 00:08:43.894 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.894 "is_configured": true, 00:08:43.894 "data_offset": 2048, 00:08:43.894 "data_size": 63488 00:08:43.894 }, 00:08:43.894 { 00:08:43.894 "name": "pt3", 00:08:43.894 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:43.894 "is_configured": true, 00:08:43.894 "data_offset": 2048, 00:08:43.894 "data_size": 63488 00:08:43.894 } 00:08:43.894 ] 00:08:43.894 }' 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.894 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.461 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:44.461 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:44.461 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:44.461 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:44.461 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:44.461 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:44.461 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:44.461 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.461 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.461 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:44.461 [2024-11-20 15:16:30.720444] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.461 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.461 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:44.461 "name": "raid_bdev1", 00:08:44.462 "aliases": [ 00:08:44.462 "ed36e14d-53d5-4a8e-86ad-5b3e8296edb2" 00:08:44.462 ], 00:08:44.462 "product_name": "Raid Volume", 00:08:44.462 "block_size": 512, 00:08:44.462 "num_blocks": 190464, 00:08:44.462 "uuid": "ed36e14d-53d5-4a8e-86ad-5b3e8296edb2", 00:08:44.462 "assigned_rate_limits": { 00:08:44.462 "rw_ios_per_sec": 0, 00:08:44.462 "rw_mbytes_per_sec": 0, 00:08:44.462 "r_mbytes_per_sec": 0, 00:08:44.462 "w_mbytes_per_sec": 0 00:08:44.462 }, 00:08:44.462 "claimed": false, 00:08:44.462 "zoned": false, 00:08:44.462 "supported_io_types": { 00:08:44.462 "read": true, 00:08:44.462 "write": true, 00:08:44.462 "unmap": true, 00:08:44.462 "flush": true, 00:08:44.462 "reset": true, 00:08:44.462 "nvme_admin": false, 00:08:44.462 "nvme_io": false, 00:08:44.462 "nvme_io_md": false, 00:08:44.462 "write_zeroes": true, 00:08:44.462 "zcopy": false, 00:08:44.462 "get_zone_info": false, 00:08:44.462 "zone_management": false, 00:08:44.462 "zone_append": false, 00:08:44.462 "compare": 
false, 00:08:44.462 "compare_and_write": false, 00:08:44.462 "abort": false, 00:08:44.462 "seek_hole": false, 00:08:44.462 "seek_data": false, 00:08:44.462 "copy": false, 00:08:44.462 "nvme_iov_md": false 00:08:44.462 }, 00:08:44.462 "memory_domains": [ 00:08:44.462 { 00:08:44.462 "dma_device_id": "system", 00:08:44.462 "dma_device_type": 1 00:08:44.462 }, 00:08:44.462 { 00:08:44.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.462 "dma_device_type": 2 00:08:44.462 }, 00:08:44.462 { 00:08:44.462 "dma_device_id": "system", 00:08:44.462 "dma_device_type": 1 00:08:44.462 }, 00:08:44.462 { 00:08:44.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.462 "dma_device_type": 2 00:08:44.462 }, 00:08:44.462 { 00:08:44.462 "dma_device_id": "system", 00:08:44.462 "dma_device_type": 1 00:08:44.462 }, 00:08:44.462 { 00:08:44.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.462 "dma_device_type": 2 00:08:44.462 } 00:08:44.462 ], 00:08:44.462 "driver_specific": { 00:08:44.462 "raid": { 00:08:44.462 "uuid": "ed36e14d-53d5-4a8e-86ad-5b3e8296edb2", 00:08:44.462 "strip_size_kb": 64, 00:08:44.462 "state": "online", 00:08:44.462 "raid_level": "raid0", 00:08:44.462 "superblock": true, 00:08:44.462 "num_base_bdevs": 3, 00:08:44.462 "num_base_bdevs_discovered": 3, 00:08:44.462 "num_base_bdevs_operational": 3, 00:08:44.462 "base_bdevs_list": [ 00:08:44.462 { 00:08:44.462 "name": "pt1", 00:08:44.462 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:44.462 "is_configured": true, 00:08:44.462 "data_offset": 2048, 00:08:44.462 "data_size": 63488 00:08:44.462 }, 00:08:44.462 { 00:08:44.462 "name": "pt2", 00:08:44.462 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.462 "is_configured": true, 00:08:44.462 "data_offset": 2048, 00:08:44.462 "data_size": 63488 00:08:44.462 }, 00:08:44.462 { 00:08:44.462 "name": "pt3", 00:08:44.462 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:44.462 "is_configured": true, 00:08:44.462 "data_offset": 2048, 00:08:44.462 "data_size": 
63488 00:08:44.462 } 00:08:44.462 ] 00:08:44.462 } 00:08:44.462 } 00:08:44.462 }' 00:08:44.462 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:44.462 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:44.462 pt2 00:08:44.462 pt3' 00:08:44.462 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.462 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:44.462 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.462 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.462 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:44.462 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.462 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.462 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.462 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.462 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.462 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.462 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:44.462 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.462 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.462 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.462 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.462 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.462 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.462 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.462 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.462 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:44.462 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.462 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.737 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.737 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.737 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.737 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:44.737 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:44.737 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.737 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.737 [2024-11-20 15:16:30.960073] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.737 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:44.737 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ed36e14d-53d5-4a8e-86ad-5b3e8296edb2 00:08:44.737 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ed36e14d-53d5-4a8e-86ad-5b3e8296edb2 ']' 00:08:44.737 15:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:44.737 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.737 15:16:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.737 [2024-11-20 15:16:30.999714] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:44.737 [2024-11-20 15:16:30.999746] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:44.737 [2024-11-20 15:16:30.999829] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:44.737 [2024-11-20 15:16:30.999891] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:44.738 [2024-11-20 15:16:30.999903] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:44.738 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.738 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:44.738 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.738 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.738 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.738 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.738 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:44.738 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:44.738 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:44.738 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:44.738 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.738 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:44.739 15:16:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.739 [2024-11-20 15:16:31.127589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:44.739 [2024-11-20 15:16:31.129924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:44.739 [2024-11-20 15:16:31.129983] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:44.739 [2024-11-20 15:16:31.130039] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:44.739 [2024-11-20 15:16:31.130099] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:44.739 [2024-11-20 15:16:31.130125] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:44.739 [2024-11-20 15:16:31.130149] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:44.739 [2024-11-20 15:16:31.130166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:44.739 request: 00:08:44.739 { 00:08:44.739 "name": "raid_bdev1", 00:08:44.739 "raid_level": "raid0", 00:08:44.739 "base_bdevs": [ 00:08:44.739 "malloc1", 00:08:44.739 "malloc2", 00:08:44.739 "malloc3" 00:08:44.739 ], 00:08:44.739 "strip_size_kb": 64, 00:08:44.739 "superblock": false, 00:08:44.739 "method": "bdev_raid_create", 00:08:44.739 "req_id": 1 00:08:44.739 } 00:08:44.739 Got JSON-RPC error response 00:08:44.739 response: 00:08:44.739 { 00:08:44.739 "code": -17, 00:08:44.739 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:44.739 } 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.739 [2024-11-20 15:16:31.175471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:44.739 [2024-11-20 15:16:31.175700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.739 [2024-11-20 15:16:31.175789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:44.739 [2024-11-20 15:16:31.175880] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.739 [2024-11-20 15:16:31.178932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.739 [2024-11-20 15:16:31.179104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:44.739 [2024-11-20 15:16:31.179350] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:44.739 [2024-11-20 15:16:31.179472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:44.739 pt1 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.739 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.053 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.053 "name": "raid_bdev1", 00:08:45.053 "uuid": "ed36e14d-53d5-4a8e-86ad-5b3e8296edb2", 00:08:45.053 
"strip_size_kb": 64, 00:08:45.053 "state": "configuring", 00:08:45.053 "raid_level": "raid0", 00:08:45.053 "superblock": true, 00:08:45.053 "num_base_bdevs": 3, 00:08:45.053 "num_base_bdevs_discovered": 1, 00:08:45.053 "num_base_bdevs_operational": 3, 00:08:45.053 "base_bdevs_list": [ 00:08:45.053 { 00:08:45.053 "name": "pt1", 00:08:45.053 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:45.053 "is_configured": true, 00:08:45.053 "data_offset": 2048, 00:08:45.053 "data_size": 63488 00:08:45.053 }, 00:08:45.053 { 00:08:45.053 "name": null, 00:08:45.053 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:45.053 "is_configured": false, 00:08:45.053 "data_offset": 2048, 00:08:45.053 "data_size": 63488 00:08:45.053 }, 00:08:45.053 { 00:08:45.053 "name": null, 00:08:45.053 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:45.053 "is_configured": false, 00:08:45.053 "data_offset": 2048, 00:08:45.053 "data_size": 63488 00:08:45.053 } 00:08:45.053 ] 00:08:45.053 }' 00:08:45.053 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.053 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.053 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:45.053 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:45.053 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.053 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.053 [2024-11-20 15:16:31.527363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:45.053 [2024-11-20 15:16:31.527439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.053 [2024-11-20 15:16:31.527472] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:45.053 [2024-11-20 15:16:31.527484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.053 [2024-11-20 15:16:31.527996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.053 [2024-11-20 15:16:31.528016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:45.053 [2024-11-20 15:16:31.528110] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:45.053 [2024-11-20 15:16:31.528140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:45.053 pt2 00:08:45.053 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.053 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:45.053 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.053 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.311 [2024-11-20 15:16:31.539353] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:45.311 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.311 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:45.311 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:45.311 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.311 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:45.311 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.311 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.311 15:16:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.311 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.311 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.311 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.311 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.311 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:45.311 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.311 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.312 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.312 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.312 "name": "raid_bdev1", 00:08:45.312 "uuid": "ed36e14d-53d5-4a8e-86ad-5b3e8296edb2", 00:08:45.312 "strip_size_kb": 64, 00:08:45.312 "state": "configuring", 00:08:45.312 "raid_level": "raid0", 00:08:45.312 "superblock": true, 00:08:45.312 "num_base_bdevs": 3, 00:08:45.312 "num_base_bdevs_discovered": 1, 00:08:45.312 "num_base_bdevs_operational": 3, 00:08:45.312 "base_bdevs_list": [ 00:08:45.312 { 00:08:45.312 "name": "pt1", 00:08:45.312 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:45.312 "is_configured": true, 00:08:45.312 "data_offset": 2048, 00:08:45.312 "data_size": 63488 00:08:45.312 }, 00:08:45.312 { 00:08:45.312 "name": null, 00:08:45.312 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:45.312 "is_configured": false, 00:08:45.312 "data_offset": 0, 00:08:45.312 "data_size": 63488 00:08:45.312 }, 00:08:45.312 { 00:08:45.312 "name": null, 00:08:45.312 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:45.312 
"is_configured": false, 00:08:45.312 "data_offset": 2048, 00:08:45.312 "data_size": 63488 00:08:45.312 } 00:08:45.312 ] 00:08:45.312 }' 00:08:45.312 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.312 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.570 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:45.570 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:45.570 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:45.570 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.570 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.570 [2024-11-20 15:16:31.959335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:45.570 [2024-11-20 15:16:31.959416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.570 [2024-11-20 15:16:31.959439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:45.570 [2024-11-20 15:16:31.959454] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.570 [2024-11-20 15:16:31.959955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.570 [2024-11-20 15:16:31.959987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:45.570 [2024-11-20 15:16:31.960095] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:45.570 [2024-11-20 15:16:31.960134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:45.570 pt2 00:08:45.570 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:45.570 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:45.570 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:45.570 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:45.570 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.570 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.570 [2024-11-20 15:16:31.971301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:45.570 [2024-11-20 15:16:31.971363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.570 [2024-11-20 15:16:31.971384] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:45.570 [2024-11-20 15:16:31.971399] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.570 [2024-11-20 15:16:31.971886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.570 [2024-11-20 15:16:31.971917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:45.570 [2024-11-20 15:16:31.971996] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:45.570 [2024-11-20 15:16:31.972027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:45.570 [2024-11-20 15:16:31.972171] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:45.570 [2024-11-20 15:16:31.972190] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:45.570 [2024-11-20 15:16:31.972508] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:45.570 [2024-11-20 15:16:31.972714] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:45.570 [2024-11-20 15:16:31.972737] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:45.570 [2024-11-20 15:16:31.972923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:45.570 pt3 00:08:45.570 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.570 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:45.570 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:45.570 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:45.570 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:45.570 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.570 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:45.570 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.570 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.570 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.570 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.570 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.570 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.570 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.570 15:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:08:45.570 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.570 15:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.570 15:16:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.570 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.570 "name": "raid_bdev1", 00:08:45.570 "uuid": "ed36e14d-53d5-4a8e-86ad-5b3e8296edb2", 00:08:45.570 "strip_size_kb": 64, 00:08:45.570 "state": "online", 00:08:45.570 "raid_level": "raid0", 00:08:45.570 "superblock": true, 00:08:45.570 "num_base_bdevs": 3, 00:08:45.570 "num_base_bdevs_discovered": 3, 00:08:45.570 "num_base_bdevs_operational": 3, 00:08:45.570 "base_bdevs_list": [ 00:08:45.570 { 00:08:45.570 "name": "pt1", 00:08:45.570 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:45.570 "is_configured": true, 00:08:45.570 "data_offset": 2048, 00:08:45.570 "data_size": 63488 00:08:45.570 }, 00:08:45.570 { 00:08:45.570 "name": "pt2", 00:08:45.570 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:45.570 "is_configured": true, 00:08:45.570 "data_offset": 2048, 00:08:45.570 "data_size": 63488 00:08:45.570 }, 00:08:45.570 { 00:08:45.570 "name": "pt3", 00:08:45.570 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:45.570 "is_configured": true, 00:08:45.570 "data_offset": 2048, 00:08:45.570 "data_size": 63488 00:08:45.570 } 00:08:45.570 ] 00:08:45.570 }' 00:08:45.570 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.570 15:16:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.139 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:46.139 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:46.139 15:16:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:46.139 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:46.139 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:46.139 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:46.139 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:46.139 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:46.139 15:16:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.139 15:16:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.139 [2024-11-20 15:16:32.367622] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:46.139 15:16:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.139 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:46.139 "name": "raid_bdev1", 00:08:46.139 "aliases": [ 00:08:46.139 "ed36e14d-53d5-4a8e-86ad-5b3e8296edb2" 00:08:46.139 ], 00:08:46.139 "product_name": "Raid Volume", 00:08:46.139 "block_size": 512, 00:08:46.139 "num_blocks": 190464, 00:08:46.139 "uuid": "ed36e14d-53d5-4a8e-86ad-5b3e8296edb2", 00:08:46.139 "assigned_rate_limits": { 00:08:46.139 "rw_ios_per_sec": 0, 00:08:46.139 "rw_mbytes_per_sec": 0, 00:08:46.139 "r_mbytes_per_sec": 0, 00:08:46.139 "w_mbytes_per_sec": 0 00:08:46.139 }, 00:08:46.139 "claimed": false, 00:08:46.139 "zoned": false, 00:08:46.139 "supported_io_types": { 00:08:46.139 "read": true, 00:08:46.139 "write": true, 00:08:46.139 "unmap": true, 00:08:46.139 "flush": true, 00:08:46.139 "reset": true, 00:08:46.139 "nvme_admin": false, 00:08:46.139 "nvme_io": false, 00:08:46.139 "nvme_io_md": false, 00:08:46.139 
"write_zeroes": true, 00:08:46.139 "zcopy": false, 00:08:46.139 "get_zone_info": false, 00:08:46.139 "zone_management": false, 00:08:46.139 "zone_append": false, 00:08:46.139 "compare": false, 00:08:46.139 "compare_and_write": false, 00:08:46.139 "abort": false, 00:08:46.139 "seek_hole": false, 00:08:46.139 "seek_data": false, 00:08:46.139 "copy": false, 00:08:46.139 "nvme_iov_md": false 00:08:46.139 }, 00:08:46.139 "memory_domains": [ 00:08:46.139 { 00:08:46.139 "dma_device_id": "system", 00:08:46.139 "dma_device_type": 1 00:08:46.139 }, 00:08:46.139 { 00:08:46.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.139 "dma_device_type": 2 00:08:46.139 }, 00:08:46.139 { 00:08:46.139 "dma_device_id": "system", 00:08:46.139 "dma_device_type": 1 00:08:46.139 }, 00:08:46.139 { 00:08:46.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.139 "dma_device_type": 2 00:08:46.139 }, 00:08:46.139 { 00:08:46.139 "dma_device_id": "system", 00:08:46.139 "dma_device_type": 1 00:08:46.139 }, 00:08:46.139 { 00:08:46.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.139 "dma_device_type": 2 00:08:46.139 } 00:08:46.139 ], 00:08:46.139 "driver_specific": { 00:08:46.139 "raid": { 00:08:46.139 "uuid": "ed36e14d-53d5-4a8e-86ad-5b3e8296edb2", 00:08:46.139 "strip_size_kb": 64, 00:08:46.139 "state": "online", 00:08:46.139 "raid_level": "raid0", 00:08:46.139 "superblock": true, 00:08:46.139 "num_base_bdevs": 3, 00:08:46.139 "num_base_bdevs_discovered": 3, 00:08:46.139 "num_base_bdevs_operational": 3, 00:08:46.139 "base_bdevs_list": [ 00:08:46.139 { 00:08:46.139 "name": "pt1", 00:08:46.139 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:46.139 "is_configured": true, 00:08:46.139 "data_offset": 2048, 00:08:46.139 "data_size": 63488 00:08:46.139 }, 00:08:46.139 { 00:08:46.139 "name": "pt2", 00:08:46.139 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:46.139 "is_configured": true, 00:08:46.139 "data_offset": 2048, 00:08:46.139 "data_size": 63488 00:08:46.139 }, 00:08:46.139 
{ 00:08:46.139 "name": "pt3", 00:08:46.139 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:46.139 "is_configured": true, 00:08:46.139 "data_offset": 2048, 00:08:46.139 "data_size": 63488 00:08:46.139 } 00:08:46.139 ] 00:08:46.139 } 00:08:46.139 } 00:08:46.139 }' 00:08:46.139 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:46.139 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:46.139 pt2 00:08:46.139 pt3' 00:08:46.139 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.139 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:46.139 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.139 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:46.139 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.139 15:16:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.139 15:16:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.139 15:16:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.139 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.139 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.139 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.139 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:08:46.139 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:46.139 15:16:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.139 15:16:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.139 15:16:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.139 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.139 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.139 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.140 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.140 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:46.140 15:16:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.140 15:16:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.398 15:16:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.398 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.398 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.398 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:46.398 15:16:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.398 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:46.398 15:16:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.399 [2024-11-20 
15:16:32.655293] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:46.399 15:16:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.399 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ed36e14d-53d5-4a8e-86ad-5b3e8296edb2 '!=' ed36e14d-53d5-4a8e-86ad-5b3e8296edb2 ']' 00:08:46.399 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:46.399 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:46.399 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:46.399 15:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64935 00:08:46.399 15:16:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 64935 ']' 00:08:46.399 15:16:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 64935 00:08:46.399 15:16:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:46.399 15:16:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:46.399 15:16:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64935 00:08:46.399 15:16:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:46.399 15:16:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:46.399 killing process with pid 64935 00:08:46.399 15:16:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64935' 00:08:46.399 15:16:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 64935 00:08:46.399 [2024-11-20 15:16:32.735968] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:46.399 [2024-11-20 15:16:32.736079] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.399 15:16:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 64935 00:08:46.399 [2024-11-20 15:16:32.736143] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:46.399 [2024-11-20 15:16:32.736159] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:46.658 [2024-11-20 15:16:33.058344] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:48.034 15:16:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:48.034 ************************************ 00:08:48.034 END TEST raid_superblock_test 00:08:48.034 ************************************ 00:08:48.034 00:08:48.034 real 0m5.096s 00:08:48.034 user 0m7.192s 00:08:48.034 sys 0m0.989s 00:08:48.034 15:16:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.034 15:16:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.034 15:16:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:48.034 15:16:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:48.034 15:16:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.034 15:16:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:48.034 ************************************ 00:08:48.034 START TEST raid_read_error_test 00:08:48.034 ************************************ 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:48.034 15:16:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0zBPslNpnl 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65183 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65183 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65183 ']' 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:48.034 15:16:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.034 [2024-11-20 15:16:34.418955] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
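The superblock test traced above (bdev_raid.sh@188-193) builds a geometry string `block_size md_size md_interleave dif_type` for the raid bdev, then requires every configured base bdev (pt1, pt2, pt3) to report the identical string. A minimal pure-shell sketch of that loop, with the jq results hard-coded from the log rather than fetched from a live SPDK target:

```shell
# Sketch of the bdev_raid.sh@188-193 comparison seen in the trace above.
# In the real script both strings come from rpc_cmd + jq; here they are
# hard-coded ('512' block size, empty md_size/md_interleave/dif_type,
# joined with spaces, exactly as cmp_raid_bdev='512   ' in the log).
cmp_raid_bdev='512   '
base_bdev_names='pt1 pt2 pt3'
for name in $base_bdev_names; do
  # what 'bdev_get_bdevs -b "$name" | jq ... join(" ")' returned per bdev
  cmp_base_bdev='512   '
  [ "$cmp_base_bdev" = "$cmp_raid_bdev" ] || { echo "mismatch on $name"; exit 1; }
done
echo "geometry matches for: $base_bdev_names"
```

The trailing spaces in `'512   '` are significant: they are the empty `md_size`, `md_interleave`, and `dif_type` fields joined by `jq`'s `join(" ")`, which is why the log's `[[ 512 == \5\1\2\ \ \ ]]` test escapes each space.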
00:08:48.034 [2024-11-20 15:16:34.419082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65183 ] 00:08:48.293 [2024-11-20 15:16:34.599352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.293 [2024-11-20 15:16:34.716105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.551 [2024-11-20 15:16:34.930763] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.551 [2024-11-20 15:16:34.930829] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.810 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.810 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:48.810 15:16:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:48.810 15:16:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:48.810 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.810 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.068 BaseBdev1_malloc 00:08:49.068 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.068 15:16:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:49.068 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.068 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.068 true 00:08:49.068 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:49.068 15:16:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:49.068 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.068 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.068 [2024-11-20 15:16:35.312411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:49.068 [2024-11-20 15:16:35.312621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.068 [2024-11-20 15:16:35.312705] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:49.068 [2024-11-20 15:16:35.312801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.068 [2024-11-20 15:16:35.315287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.068 [2024-11-20 15:16:35.315452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:49.068 BaseBdev1 00:08:49.068 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.068 15:16:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.069 BaseBdev2_malloc 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.069 true 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.069 [2024-11-20 15:16:35.374310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:49.069 [2024-11-20 15:16:35.374373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.069 [2024-11-20 15:16:35.374394] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:49.069 [2024-11-20 15:16:35.374407] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.069 [2024-11-20 15:16:35.376782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.069 [2024-11-20 15:16:35.376825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:49.069 BaseBdev2 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.069 BaseBdev3_malloc 00:08:49.069 15:16:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.069 true 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.069 [2024-11-20 15:16:35.445140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:49.069 [2024-11-20 15:16:35.445315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.069 [2024-11-20 15:16:35.445362] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:49.069 [2024-11-20 15:16:35.445378] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.069 [2024-11-20 15:16:35.447989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.069 [2024-11-20 15:16:35.448035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:49.069 BaseBdev3 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.069 [2024-11-20 15:16:35.457206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:49.069 [2024-11-20 15:16:35.459242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:49.069 [2024-11-20 15:16:35.459480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:49.069 [2024-11-20 15:16:35.459714] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:49.069 [2024-11-20 15:16:35.459733] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:49.069 [2024-11-20 15:16:35.460025] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:49.069 [2024-11-20 15:16:35.460192] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:49.069 [2024-11-20 15:16:35.460210] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:49.069 [2024-11-20 15:16:35.460376] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.069 15:16:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.069 "name": "raid_bdev1", 00:08:49.069 "uuid": "4056cfe1-d63b-44b9-8a2d-aeee8ff20c9c", 00:08:49.069 "strip_size_kb": 64, 00:08:49.069 "state": "online", 00:08:49.069 "raid_level": "raid0", 00:08:49.069 "superblock": true, 00:08:49.069 "num_base_bdevs": 3, 00:08:49.069 "num_base_bdevs_discovered": 3, 00:08:49.069 "num_base_bdevs_operational": 3, 00:08:49.069 "base_bdevs_list": [ 00:08:49.069 { 00:08:49.069 "name": "BaseBdev1", 00:08:49.069 "uuid": "61b408b2-4b72-5d0d-8bfe-b72a212ae2ba", 00:08:49.069 "is_configured": true, 00:08:49.069 "data_offset": 2048, 00:08:49.069 "data_size": 63488 00:08:49.069 }, 00:08:49.069 { 00:08:49.069 "name": "BaseBdev2", 00:08:49.069 "uuid": "f31ed54d-a6c4-5efc-a1e5-96e28b70492f", 00:08:49.069 "is_configured": true, 00:08:49.069 "data_offset": 2048, 00:08:49.069 "data_size": 63488 
00:08:49.069 }, 00:08:49.069 { 00:08:49.069 "name": "BaseBdev3", 00:08:49.069 "uuid": "cbe96396-c96c-54db-9dd5-93b02feda8c4", 00:08:49.069 "is_configured": true, 00:08:49.069 "data_offset": 2048, 00:08:49.069 "data_size": 63488 00:08:49.069 } 00:08:49.069 ] 00:08:49.069 }' 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.069 15:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.635 15:16:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:49.635 15:16:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:49.635 [2024-11-20 15:16:35.993584] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:50.574 15:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:50.574 15:16:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.574 15:16:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.574 15:16:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.574 15:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:50.574 15:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:50.574 15:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:50.574 15:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:50.574 15:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:50.574 15:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
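The `verify_raid_bdev_state` helper invoked above feeds the `bdev_raid_get_bdevs` JSON through `jq -r '.[] | select(.name == "raid_bdev1")'` and then checks fields such as `state` and `raid_level` against the expected `online`/`raid0`. A simplified stand-in that greps the same fields out of a captured snippet with `sed`, so the sketch runs without jq or a running target (field values copied from the log):

```shell
# Stand-in for the verify_raid_bdev_state checks driven by the JSON above.
# The real script uses rpc_cmd bdev_raid_get_bdevs + jq; here the relevant
# key/value pairs are extracted from a captured fragment with sed instead.
raid_bdev_info='"state": "online", "raid_level": "raid0", "num_base_bdevs_discovered": 3,'
state=$(printf '%s\n' "$raid_bdev_info" | sed -n 's/.*"state": "\([^"]*\)".*/\1/p')
level=$(printf '%s\n' "$raid_bdev_info" | sed -n 's/.*"raid_level": "\([^"]*\)".*/\1/p')
if [ "$state" = online ] && [ "$level" = raid0 ]; then
  echo "raid_bdev1 verified: $state/$level"
fi
```

Note this is only an illustration of the shape of the check; regex extraction is not a safe general JSON parser, which is why the test scripts use `jq`.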
00:08:50.574 15:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:50.574 15:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.574 15:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.574 15:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.574 15:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.574 15:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.574 15:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.574 15:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:50.574 15:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.574 15:16:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.574 15:16:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.574 15:16:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.574 15:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.574 "name": "raid_bdev1", 00:08:50.574 "uuid": "4056cfe1-d63b-44b9-8a2d-aeee8ff20c9c", 00:08:50.574 "strip_size_kb": 64, 00:08:50.574 "state": "online", 00:08:50.574 "raid_level": "raid0", 00:08:50.574 "superblock": true, 00:08:50.574 "num_base_bdevs": 3, 00:08:50.574 "num_base_bdevs_discovered": 3, 00:08:50.574 "num_base_bdevs_operational": 3, 00:08:50.574 "base_bdevs_list": [ 00:08:50.574 { 00:08:50.574 "name": "BaseBdev1", 00:08:50.574 "uuid": "61b408b2-4b72-5d0d-8bfe-b72a212ae2ba", 00:08:50.574 "is_configured": true, 00:08:50.574 "data_offset": 2048, 00:08:50.574 "data_size": 63488 
00:08:50.574 }, 00:08:50.574 { 00:08:50.575 "name": "BaseBdev2", 00:08:50.575 "uuid": "f31ed54d-a6c4-5efc-a1e5-96e28b70492f", 00:08:50.575 "is_configured": true, 00:08:50.575 "data_offset": 2048, 00:08:50.575 "data_size": 63488 00:08:50.575 }, 00:08:50.575 { 00:08:50.575 "name": "BaseBdev3", 00:08:50.575 "uuid": "cbe96396-c96c-54db-9dd5-93b02feda8c4", 00:08:50.575 "is_configured": true, 00:08:50.575 "data_offset": 2048, 00:08:50.575 "data_size": 63488 00:08:50.575 } 00:08:50.575 ] 00:08:50.575 }' 00:08:50.575 15:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.575 15:16:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.142 15:16:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:51.142 15:16:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.142 15:16:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.142 [2024-11-20 15:16:37.354167] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:51.142 [2024-11-20 15:16:37.355335] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:51.142 [2024-11-20 15:16:37.358044] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.142 [2024-11-20 15:16:37.358132] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.142 [2024-11-20 15:16:37.358176] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:51.142 [2024-11-20 15:16:37.358187] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:51.142 { 00:08:51.142 "results": [ 00:08:51.142 { 00:08:51.142 "job": "raid_bdev1", 00:08:51.142 "core_mask": "0x1", 00:08:51.142 "workload": "randrw", 00:08:51.142 "percentage": 50, 
00:08:51.142 "status": "finished", 00:08:51.142 "queue_depth": 1, 00:08:51.142 "io_size": 131072, 00:08:51.142 "runtime": 1.361924, 00:08:51.142 "iops": 16180.051162913644, 00:08:51.142 "mibps": 2022.5063953642054, 00:08:51.142 "io_failed": 1, 00:08:51.142 "io_timeout": 0, 00:08:51.142 "avg_latency_us": 85.30083887758686, 00:08:51.142 "min_latency_us": 26.936546184738955, 00:08:51.142 "max_latency_us": 1460.7421686746989 00:08:51.142 } 00:08:51.142 ], 00:08:51.142 "core_count": 1 00:08:51.142 } 00:08:51.142 15:16:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.142 15:16:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65183 00:08:51.142 15:16:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65183 ']' 00:08:51.142 15:16:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65183 00:08:51.142 15:16:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:51.142 15:16:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:51.142 15:16:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65183 00:08:51.142 killing process with pid 65183 00:08:51.142 15:16:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:51.142 15:16:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:51.142 15:16:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65183' 00:08:51.142 15:16:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65183 00:08:51.142 [2024-11-20 15:16:37.412095] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:51.142 15:16:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65183 00:08:51.401 [2024-11-20 
15:16:37.639936] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:52.370 15:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:52.370 15:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0zBPslNpnl 00:08:52.370 15:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:52.370 15:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:52.370 15:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:52.370 ************************************ 00:08:52.370 END TEST raid_read_error_test 00:08:52.370 ************************************ 00:08:52.370 15:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:52.370 15:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:52.370 15:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:52.370 00:08:52.370 real 0m4.534s 00:08:52.370 user 0m5.365s 00:08:52.370 sys 0m0.591s 00:08:52.370 15:16:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.370 15:16:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.629 15:16:38 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:52.629 15:16:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:52.629 15:16:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.629 15:16:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:52.629 ************************************ 00:08:52.629 START TEST raid_write_error_test 00:08:52.629 ************************************ 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:08:52.629 15:16:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:52.629 15:16:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.lcm8Y1xXaA 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65336 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65336 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65336 ']' 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.629 15:16:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.629 [2024-11-20 15:16:39.027263] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:08:52.629 [2024-11-20 15:16:39.027390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65336 ] 00:08:52.888 [2024-11-20 15:16:39.205938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.888 [2024-11-20 15:16:39.326176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.147 [2024-11-20 15:16:39.530210] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:53.147 [2024-11-20 15:16:39.530431] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:53.716 15:16:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.716 15:16:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:53.716 15:16:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:53.716 15:16:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:53.716 15:16:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.716 15:16:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.716 BaseBdev1_malloc 00:08:53.716 15:16:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.716 15:16:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:53.716 15:16:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.716 15:16:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.716 true 00:08:53.716 15:16:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.716 15:16:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:53.716 15:16:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.716 15:16:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.716 [2024-11-20 15:16:39.966564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:53.716 [2024-11-20 15:16:39.966623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.716 [2024-11-20 15:16:39.966647] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:53.716 [2024-11-20 15:16:39.966677] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.716 [2024-11-20 15:16:39.969012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.716 [2024-11-20 15:16:39.969168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:53.716 BaseBdev1 00:08:53.716 15:16:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.716 15:16:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:53.716 15:16:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:53.716 15:16:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.716 15:16:39 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:53.716 BaseBdev2_malloc 00:08:53.716 15:16:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.716 15:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:53.716 15:16:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.716 15:16:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.716 true 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.717 [2024-11-20 15:16:40.031290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:53.717 [2024-11-20 15:16:40.031477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.717 [2024-11-20 15:16:40.031506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:53.717 [2024-11-20 15:16:40.031521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.717 [2024-11-20 15:16:40.033911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.717 [2024-11-20 15:16:40.033952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:53.717 BaseBdev2 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:53.717 15:16:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.717 BaseBdev3_malloc 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.717 true 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.717 [2024-11-20 15:16:40.108431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:53.717 [2024-11-20 15:16:40.108596] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.717 [2024-11-20 15:16:40.108627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:53.717 [2024-11-20 15:16:40.108641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.717 [2024-11-20 15:16:40.110986] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.717 [2024-11-20 15:16:40.111027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:53.717 BaseBdev3 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.717 [2024-11-20 15:16:40.120510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:53.717 [2024-11-20 15:16:40.122534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:53.717 [2024-11-20 15:16:40.122605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:53.717 [2024-11-20 15:16:40.122796] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:53.717 [2024-11-20 15:16:40.122811] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:53.717 [2024-11-20 15:16:40.123071] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:53.717 [2024-11-20 15:16:40.123349] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:53.717 [2024-11-20 15:16:40.123373] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:53.717 [2024-11-20 15:16:40.123513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.717 "name": "raid_bdev1", 00:08:53.717 "uuid": "89d463a2-d826-42d9-ba53-4c11f0b4aa7f", 00:08:53.717 "strip_size_kb": 64, 00:08:53.717 "state": "online", 00:08:53.717 "raid_level": "raid0", 00:08:53.717 "superblock": true, 00:08:53.717 "num_base_bdevs": 3, 00:08:53.717 "num_base_bdevs_discovered": 3, 00:08:53.717 "num_base_bdevs_operational": 3, 00:08:53.717 "base_bdevs_list": [ 00:08:53.717 { 00:08:53.717 "name": "BaseBdev1", 
00:08:53.717 "uuid": "e9fd031e-237c-5091-8504-2e0efcc86293", 00:08:53.717 "is_configured": true, 00:08:53.717 "data_offset": 2048, 00:08:53.717 "data_size": 63488 00:08:53.717 }, 00:08:53.717 { 00:08:53.717 "name": "BaseBdev2", 00:08:53.717 "uuid": "87e7ae58-c1ad-5ffb-80e9-60f5aa01b55f", 00:08:53.717 "is_configured": true, 00:08:53.717 "data_offset": 2048, 00:08:53.717 "data_size": 63488 00:08:53.717 }, 00:08:53.717 { 00:08:53.717 "name": "BaseBdev3", 00:08:53.717 "uuid": "fce0ee48-8f1b-5281-869f-c1c9d65e9450", 00:08:53.717 "is_configured": true, 00:08:53.717 "data_offset": 2048, 00:08:53.717 "data_size": 63488 00:08:53.717 } 00:08:53.717 ] 00:08:53.717 }' 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.717 15:16:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.285 15:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:54.285 15:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:54.285 [2024-11-20 15:16:40.625217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:55.223 15:16:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:55.223 15:16:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.223 15:16:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.223 15:16:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.223 15:16:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:55.223 15:16:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:55.223 15:16:41 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:55.223 15:16:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:55.223 15:16:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:55.223 15:16:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.223 15:16:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.223 15:16:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.223 15:16:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.223 15:16:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.223 15:16:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.223 15:16:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.223 15:16:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.223 15:16:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.223 15:16:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.223 15:16:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.223 15:16:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.223 15:16:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.223 15:16:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.223 "name": "raid_bdev1", 00:08:55.223 "uuid": "89d463a2-d826-42d9-ba53-4c11f0b4aa7f", 00:08:55.223 "strip_size_kb": 64, 00:08:55.223 "state": "online", 00:08:55.223 
"raid_level": "raid0", 00:08:55.224 "superblock": true, 00:08:55.224 "num_base_bdevs": 3, 00:08:55.224 "num_base_bdevs_discovered": 3, 00:08:55.224 "num_base_bdevs_operational": 3, 00:08:55.224 "base_bdevs_list": [ 00:08:55.224 { 00:08:55.224 "name": "BaseBdev1", 00:08:55.224 "uuid": "e9fd031e-237c-5091-8504-2e0efcc86293", 00:08:55.224 "is_configured": true, 00:08:55.224 "data_offset": 2048, 00:08:55.224 "data_size": 63488 00:08:55.224 }, 00:08:55.224 { 00:08:55.224 "name": "BaseBdev2", 00:08:55.224 "uuid": "87e7ae58-c1ad-5ffb-80e9-60f5aa01b55f", 00:08:55.224 "is_configured": true, 00:08:55.224 "data_offset": 2048, 00:08:55.224 "data_size": 63488 00:08:55.224 }, 00:08:55.224 { 00:08:55.224 "name": "BaseBdev3", 00:08:55.224 "uuid": "fce0ee48-8f1b-5281-869f-c1c9d65e9450", 00:08:55.224 "is_configured": true, 00:08:55.224 "data_offset": 2048, 00:08:55.224 "data_size": 63488 00:08:55.224 } 00:08:55.224 ] 00:08:55.224 }' 00:08:55.224 15:16:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.224 15:16:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.803 15:16:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:55.803 15:16:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.803 15:16:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.803 [2024-11-20 15:16:41.984038] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:55.803 [2024-11-20 15:16:41.984067] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:55.803 [2024-11-20 15:16:41.986727] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.803 [2024-11-20 15:16:41.986771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.803 [2024-11-20 15:16:41.986811] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:55.803 [2024-11-20 15:16:41.986823] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:55.803 { 00:08:55.803 "results": [ 00:08:55.803 { 00:08:55.803 "job": "raid_bdev1", 00:08:55.803 "core_mask": "0x1", 00:08:55.803 "workload": "randrw", 00:08:55.803 "percentage": 50, 00:08:55.803 "status": "finished", 00:08:55.803 "queue_depth": 1, 00:08:55.803 "io_size": 131072, 00:08:55.803 "runtime": 1.358669, 00:08:55.803 "iops": 16135.644516802842, 00:08:55.803 "mibps": 2016.9555646003553, 00:08:55.803 "io_failed": 1, 00:08:55.803 "io_timeout": 0, 00:08:55.803 "avg_latency_us": 85.508099319372, 00:08:55.803 "min_latency_us": 26.936546184738955, 00:08:55.803 "max_latency_us": 1441.0024096385541 00:08:55.803 } 00:08:55.803 ], 00:08:55.803 "core_count": 1 00:08:55.803 } 00:08:55.803 15:16:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.803 15:16:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65336 00:08:55.803 15:16:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65336 ']' 00:08:55.803 15:16:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65336 00:08:55.803 15:16:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:55.803 15:16:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:55.803 15:16:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65336 00:08:55.803 15:16:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:55.803 15:16:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:55.803 killing process with pid 65336 00:08:55.803 15:16:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65336' 00:08:55.803 15:16:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65336 00:08:55.803 [2024-11-20 15:16:42.038207] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:55.803 15:16:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65336 00:08:55.803 [2024-11-20 15:16:42.273596] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:57.178 15:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:57.178 15:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.lcm8Y1xXaA 00:08:57.178 15:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:57.178 15:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:57.178 15:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:57.178 15:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:57.178 15:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:57.178 15:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:57.178 00:08:57.178 real 0m4.570s 00:08:57.178 user 0m5.401s 00:08:57.178 sys 0m0.616s 00:08:57.178 ************************************ 00:08:57.178 END TEST raid_write_error_test 00:08:57.178 ************************************ 00:08:57.178 15:16:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.178 15:16:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.178 15:16:43 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:57.178 15:16:43 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:08:57.178 15:16:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:57.178 15:16:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.178 15:16:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:57.178 ************************************ 00:08:57.178 START TEST raid_state_function_test 00:08:57.178 ************************************ 00:08:57.178 15:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:08:57.178 15:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:57.178 15:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:57.179 15:16:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65474 00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:57.179 Process raid pid: 65474 00:08:57.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65474' 00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65474 00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65474 ']' 00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.179 15:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.438 [2024-11-20 15:16:43.666506] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:08:57.438 [2024-11-20 15:16:43.666928] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.438 [2024-11-20 15:16:43.841709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.696 [2024-11-20 15:16:43.972444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.954 [2024-11-20 15:16:44.181178] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.954 [2024-11-20 15:16:44.181422] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.212 15:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:58.212 15:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:58.212 15:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:58.212 15:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.212 15:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.212 [2024-11-20 15:16:44.540970] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:58.212 [2024-11-20 15:16:44.541030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:58.212 [2024-11-20 15:16:44.541042] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:58.212 [2024-11-20 15:16:44.541055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:58.212 [2024-11-20 15:16:44.541063] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:08:58.212 [2024-11-20 15:16:44.541074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:58.212 15:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.212 15:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:58.212 15:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.212 15:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.212 15:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.212 15:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.212 15:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.212 15:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.212 15:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.212 15:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.212 15:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.212 15:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.212 15:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.212 15:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.212 15:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.212 15:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.212 15:16:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.212 "name": "Existed_Raid", 00:08:58.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.212 "strip_size_kb": 64, 00:08:58.212 "state": "configuring", 00:08:58.212 "raid_level": "concat", 00:08:58.212 "superblock": false, 00:08:58.212 "num_base_bdevs": 3, 00:08:58.213 "num_base_bdevs_discovered": 0, 00:08:58.213 "num_base_bdevs_operational": 3, 00:08:58.213 "base_bdevs_list": [ 00:08:58.213 { 00:08:58.213 "name": "BaseBdev1", 00:08:58.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.213 "is_configured": false, 00:08:58.213 "data_offset": 0, 00:08:58.213 "data_size": 0 00:08:58.213 }, 00:08:58.213 { 00:08:58.213 "name": "BaseBdev2", 00:08:58.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.213 "is_configured": false, 00:08:58.213 "data_offset": 0, 00:08:58.213 "data_size": 0 00:08:58.213 }, 00:08:58.213 { 00:08:58.213 "name": "BaseBdev3", 00:08:58.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.213 "is_configured": false, 00:08:58.213 "data_offset": 0, 00:08:58.213 "data_size": 0 00:08:58.213 } 00:08:58.213 ] 00:08:58.213 }' 00:08:58.213 15:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.213 15:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.471 15:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:58.471 15:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.471 15:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.471 [2024-11-20 15:16:44.948367] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:58.471 [2024-11-20 15:16:44.948554] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:08:58.729 15:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.729 15:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:58.729 15:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.729 15:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.729 [2024-11-20 15:16:44.956358] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:58.729 [2024-11-20 15:16:44.956408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:58.729 [2024-11-20 15:16:44.956419] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:58.729 [2024-11-20 15:16:44.956432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:58.729 [2024-11-20 15:16:44.956440] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:58.729 [2024-11-20 15:16:44.956452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:58.729 15:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.729 15:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:58.729 15:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.729 15:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.729 [2024-11-20 15:16:45.002713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:58.729 BaseBdev1 00:08:58.729 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:08:58.729 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:58.729 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:58.729 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:58.729 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:58.729 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:58.729 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:58.729 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:58.729 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.729 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.729 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.729 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:58.729 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.729 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.729 [ 00:08:58.729 { 00:08:58.729 "name": "BaseBdev1", 00:08:58.729 "aliases": [ 00:08:58.729 "860098ca-09fe-4f08-8a70-bfcba1160e95" 00:08:58.729 ], 00:08:58.729 "product_name": "Malloc disk", 00:08:58.729 "block_size": 512, 00:08:58.729 "num_blocks": 65536, 00:08:58.729 "uuid": "860098ca-09fe-4f08-8a70-bfcba1160e95", 00:08:58.729 "assigned_rate_limits": { 00:08:58.729 "rw_ios_per_sec": 0, 00:08:58.729 "rw_mbytes_per_sec": 0, 00:08:58.729 "r_mbytes_per_sec": 0, 00:08:58.729 "w_mbytes_per_sec": 0 00:08:58.729 }, 
00:08:58.729 "claimed": true, 00:08:58.729 "claim_type": "exclusive_write", 00:08:58.729 "zoned": false, 00:08:58.729 "supported_io_types": { 00:08:58.729 "read": true, 00:08:58.729 "write": true, 00:08:58.729 "unmap": true, 00:08:58.729 "flush": true, 00:08:58.729 "reset": true, 00:08:58.729 "nvme_admin": false, 00:08:58.729 "nvme_io": false, 00:08:58.729 "nvme_io_md": false, 00:08:58.729 "write_zeroes": true, 00:08:58.729 "zcopy": true, 00:08:58.729 "get_zone_info": false, 00:08:58.729 "zone_management": false, 00:08:58.729 "zone_append": false, 00:08:58.729 "compare": false, 00:08:58.729 "compare_and_write": false, 00:08:58.729 "abort": true, 00:08:58.729 "seek_hole": false, 00:08:58.729 "seek_data": false, 00:08:58.729 "copy": true, 00:08:58.729 "nvme_iov_md": false 00:08:58.729 }, 00:08:58.729 "memory_domains": [ 00:08:58.729 { 00:08:58.729 "dma_device_id": "system", 00:08:58.729 "dma_device_type": 1 00:08:58.729 }, 00:08:58.729 { 00:08:58.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.729 "dma_device_type": 2 00:08:58.729 } 00:08:58.729 ], 00:08:58.729 "driver_specific": {} 00:08:58.729 } 00:08:58.729 ] 00:08:58.729 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.729 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:58.729 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:58.729 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.729 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.729 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.729 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.729 15:16:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.729 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.729 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.729 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.729 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.729 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.729 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.729 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.729 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.729 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.729 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.729 "name": "Existed_Raid", 00:08:58.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.729 "strip_size_kb": 64, 00:08:58.729 "state": "configuring", 00:08:58.729 "raid_level": "concat", 00:08:58.729 "superblock": false, 00:08:58.729 "num_base_bdevs": 3, 00:08:58.729 "num_base_bdevs_discovered": 1, 00:08:58.729 "num_base_bdevs_operational": 3, 00:08:58.729 "base_bdevs_list": [ 00:08:58.729 { 00:08:58.729 "name": "BaseBdev1", 00:08:58.729 "uuid": "860098ca-09fe-4f08-8a70-bfcba1160e95", 00:08:58.729 "is_configured": true, 00:08:58.729 "data_offset": 0, 00:08:58.729 "data_size": 65536 00:08:58.729 }, 00:08:58.729 { 00:08:58.729 "name": "BaseBdev2", 00:08:58.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.729 "is_configured": false, 00:08:58.730 
"data_offset": 0, 00:08:58.730 "data_size": 0 00:08:58.730 }, 00:08:58.730 { 00:08:58.730 "name": "BaseBdev3", 00:08:58.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.730 "is_configured": false, 00:08:58.730 "data_offset": 0, 00:08:58.730 "data_size": 0 00:08:58.730 } 00:08:58.730 ] 00:08:58.730 }' 00:08:58.730 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.730 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.987 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:58.987 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.987 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.245 [2024-11-20 15:16:45.470084] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:59.245 [2024-11-20 15:16:45.470137] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:59.245 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.245 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:59.245 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.245 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.245 [2024-11-20 15:16:45.482133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:59.245 [2024-11-20 15:16:45.484433] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:59.245 [2024-11-20 15:16:45.484616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:59.245 [2024-11-20 15:16:45.484739] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:59.245 [2024-11-20 15:16:45.484859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:59.245 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.245 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:59.245 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:59.245 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:59.245 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.245 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.245 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.245 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.245 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.245 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.245 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.245 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.245 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.245 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.245 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:59.245 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.245 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.246 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.246 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.246 "name": "Existed_Raid", 00:08:59.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.246 "strip_size_kb": 64, 00:08:59.246 "state": "configuring", 00:08:59.246 "raid_level": "concat", 00:08:59.246 "superblock": false, 00:08:59.246 "num_base_bdevs": 3, 00:08:59.246 "num_base_bdevs_discovered": 1, 00:08:59.246 "num_base_bdevs_operational": 3, 00:08:59.246 "base_bdevs_list": [ 00:08:59.246 { 00:08:59.246 "name": "BaseBdev1", 00:08:59.246 "uuid": "860098ca-09fe-4f08-8a70-bfcba1160e95", 00:08:59.246 "is_configured": true, 00:08:59.246 "data_offset": 0, 00:08:59.246 "data_size": 65536 00:08:59.246 }, 00:08:59.246 { 00:08:59.246 "name": "BaseBdev2", 00:08:59.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.246 "is_configured": false, 00:08:59.246 "data_offset": 0, 00:08:59.246 "data_size": 0 00:08:59.246 }, 00:08:59.246 { 00:08:59.246 "name": "BaseBdev3", 00:08:59.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.246 "is_configured": false, 00:08:59.246 "data_offset": 0, 00:08:59.246 "data_size": 0 00:08:59.246 } 00:08:59.246 ] 00:08:59.246 }' 00:08:59.246 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.246 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.504 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:59.504 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:59.504 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.504 [2024-11-20 15:16:45.947937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:59.504 BaseBdev2 00:08:59.504 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.504 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:59.504 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:59.504 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:59.504 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:59.504 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:59.504 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:59.504 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:59.504 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.504 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.504 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.504 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:59.504 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.504 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.504 [ 00:08:59.504 { 00:08:59.504 "name": "BaseBdev2", 00:08:59.504 "aliases": [ 00:08:59.504 "27aab962-41a9-4a41-b78c-5adc26d3086a" 00:08:59.504 ], 00:08:59.504 
"product_name": "Malloc disk", 00:08:59.505 "block_size": 512, 00:08:59.505 "num_blocks": 65536, 00:08:59.505 "uuid": "27aab962-41a9-4a41-b78c-5adc26d3086a", 00:08:59.505 "assigned_rate_limits": { 00:08:59.505 "rw_ios_per_sec": 0, 00:08:59.505 "rw_mbytes_per_sec": 0, 00:08:59.505 "r_mbytes_per_sec": 0, 00:08:59.505 "w_mbytes_per_sec": 0 00:08:59.505 }, 00:08:59.505 "claimed": true, 00:08:59.505 "claim_type": "exclusive_write", 00:08:59.505 "zoned": false, 00:08:59.505 "supported_io_types": { 00:08:59.505 "read": true, 00:08:59.505 "write": true, 00:08:59.505 "unmap": true, 00:08:59.505 "flush": true, 00:08:59.505 "reset": true, 00:08:59.505 "nvme_admin": false, 00:08:59.505 "nvme_io": false, 00:08:59.505 "nvme_io_md": false, 00:08:59.834 "write_zeroes": true, 00:08:59.834 "zcopy": true, 00:08:59.834 "get_zone_info": false, 00:08:59.834 "zone_management": false, 00:08:59.834 "zone_append": false, 00:08:59.834 "compare": false, 00:08:59.834 "compare_and_write": false, 00:08:59.834 "abort": true, 00:08:59.834 "seek_hole": false, 00:08:59.834 "seek_data": false, 00:08:59.834 "copy": true, 00:08:59.834 "nvme_iov_md": false 00:08:59.834 }, 00:08:59.834 "memory_domains": [ 00:08:59.834 { 00:08:59.834 "dma_device_id": "system", 00:08:59.834 "dma_device_type": 1 00:08:59.834 }, 00:08:59.834 { 00:08:59.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.834 "dma_device_type": 2 00:08:59.834 } 00:08:59.834 ], 00:08:59.834 "driver_specific": {} 00:08:59.834 } 00:08:59.834 ] 00:08:59.834 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.834 15:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:59.835 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:59.835 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:59.835 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:59.835 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.835 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.835 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.835 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.835 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.835 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.835 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.835 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.835 15:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.835 15:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.835 15:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.835 15:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.835 15:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.835 15:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.835 15:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.835 "name": "Existed_Raid", 00:08:59.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.835 "strip_size_kb": 64, 00:08:59.835 "state": "configuring", 00:08:59.835 "raid_level": "concat", 00:08:59.835 "superblock": false, 
00:08:59.835 "num_base_bdevs": 3, 00:08:59.835 "num_base_bdevs_discovered": 2, 00:08:59.835 "num_base_bdevs_operational": 3, 00:08:59.835 "base_bdevs_list": [ 00:08:59.835 { 00:08:59.835 "name": "BaseBdev1", 00:08:59.835 "uuid": "860098ca-09fe-4f08-8a70-bfcba1160e95", 00:08:59.835 "is_configured": true, 00:08:59.835 "data_offset": 0, 00:08:59.835 "data_size": 65536 00:08:59.835 }, 00:08:59.835 { 00:08:59.835 "name": "BaseBdev2", 00:08:59.835 "uuid": "27aab962-41a9-4a41-b78c-5adc26d3086a", 00:08:59.835 "is_configured": true, 00:08:59.835 "data_offset": 0, 00:08:59.835 "data_size": 65536 00:08:59.835 }, 00:08:59.835 { 00:08:59.835 "name": "BaseBdev3", 00:08:59.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.835 "is_configured": false, 00:08:59.835 "data_offset": 0, 00:08:59.835 "data_size": 0 00:08:59.835 } 00:08:59.835 ] 00:08:59.835 }' 00:08:59.835 15:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.835 15:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.094 15:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:00.094 15:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.094 15:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.094 [2024-11-20 15:16:46.487975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:00.094 [2024-11-20 15:16:46.488021] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:00.094 [2024-11-20 15:16:46.488036] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:00.094 [2024-11-20 15:16:46.488310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:00.094 [2024-11-20 15:16:46.488480] bdev_raid.c:1764:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000007e80 00:09:00.094 [2024-11-20 15:16:46.488491] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:00.094 [2024-11-20 15:16:46.488754] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.094 BaseBdev3 00:09:00.094 15:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.094 15:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:00.094 15:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:00.094 15:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:00.094 15:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:00.094 15:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:00.094 15:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:00.094 15:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:00.094 15:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.094 15:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.094 15:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.094 15:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:00.094 15:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.094 15:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.094 [ 00:09:00.094 { 00:09:00.094 "name": "BaseBdev3", 00:09:00.094 "aliases": [ 
00:09:00.094 "483438bc-3d00-4a56-a200-7adfe5a00151" 00:09:00.094 ], 00:09:00.094 "product_name": "Malloc disk", 00:09:00.094 "block_size": 512, 00:09:00.094 "num_blocks": 65536, 00:09:00.094 "uuid": "483438bc-3d00-4a56-a200-7adfe5a00151", 00:09:00.094 "assigned_rate_limits": { 00:09:00.094 "rw_ios_per_sec": 0, 00:09:00.094 "rw_mbytes_per_sec": 0, 00:09:00.094 "r_mbytes_per_sec": 0, 00:09:00.094 "w_mbytes_per_sec": 0 00:09:00.094 }, 00:09:00.094 "claimed": true, 00:09:00.094 "claim_type": "exclusive_write", 00:09:00.094 "zoned": false, 00:09:00.094 "supported_io_types": { 00:09:00.094 "read": true, 00:09:00.094 "write": true, 00:09:00.094 "unmap": true, 00:09:00.094 "flush": true, 00:09:00.094 "reset": true, 00:09:00.094 "nvme_admin": false, 00:09:00.094 "nvme_io": false, 00:09:00.094 "nvme_io_md": false, 00:09:00.094 "write_zeroes": true, 00:09:00.094 "zcopy": true, 00:09:00.094 "get_zone_info": false, 00:09:00.094 "zone_management": false, 00:09:00.094 "zone_append": false, 00:09:00.094 "compare": false, 00:09:00.094 "compare_and_write": false, 00:09:00.095 "abort": true, 00:09:00.095 "seek_hole": false, 00:09:00.095 "seek_data": false, 00:09:00.095 "copy": true, 00:09:00.095 "nvme_iov_md": false 00:09:00.095 }, 00:09:00.095 "memory_domains": [ 00:09:00.095 { 00:09:00.095 "dma_device_id": "system", 00:09:00.095 "dma_device_type": 1 00:09:00.095 }, 00:09:00.095 { 00:09:00.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.095 "dma_device_type": 2 00:09:00.095 } 00:09:00.095 ], 00:09:00.095 "driver_specific": {} 00:09:00.095 } 00:09:00.095 ] 00:09:00.095 15:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.095 15:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:00.095 15:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:00.095 15:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:09:00.095 15:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:00.095 15:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.095 15:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.095 15:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:00.095 15:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.095 15:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.095 15:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.095 15:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.095 15:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.095 15:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.095 15:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.095 15:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.095 15:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.095 15:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.095 15:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.353 15:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.353 "name": "Existed_Raid", 00:09:00.353 "uuid": "0eccaf87-a58f-4f86-8449-a286faba342e", 00:09:00.353 "strip_size_kb": 64, 00:09:00.353 "state": "online", 
00:09:00.353 "raid_level": "concat", 00:09:00.353 "superblock": false, 00:09:00.353 "num_base_bdevs": 3, 00:09:00.353 "num_base_bdevs_discovered": 3, 00:09:00.353 "num_base_bdevs_operational": 3, 00:09:00.353 "base_bdevs_list": [ 00:09:00.353 { 00:09:00.353 "name": "BaseBdev1", 00:09:00.353 "uuid": "860098ca-09fe-4f08-8a70-bfcba1160e95", 00:09:00.353 "is_configured": true, 00:09:00.353 "data_offset": 0, 00:09:00.353 "data_size": 65536 00:09:00.353 }, 00:09:00.353 { 00:09:00.353 "name": "BaseBdev2", 00:09:00.353 "uuid": "27aab962-41a9-4a41-b78c-5adc26d3086a", 00:09:00.353 "is_configured": true, 00:09:00.353 "data_offset": 0, 00:09:00.353 "data_size": 65536 00:09:00.353 }, 00:09:00.353 { 00:09:00.353 "name": "BaseBdev3", 00:09:00.353 "uuid": "483438bc-3d00-4a56-a200-7adfe5a00151", 00:09:00.353 "is_configured": true, 00:09:00.353 "data_offset": 0, 00:09:00.353 "data_size": 65536 00:09:00.353 } 00:09:00.353 ] 00:09:00.353 }' 00:09:00.353 15:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.353 15:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.612 15:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:00.612 15:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:00.612 15:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:00.612 15:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:00.612 15:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:00.612 15:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:00.612 15:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:00.612 15:16:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:00.612 15:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.612 15:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.612 [2024-11-20 15:16:46.947726] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:00.612 15:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.612 15:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:00.612 "name": "Existed_Raid", 00:09:00.612 "aliases": [ 00:09:00.612 "0eccaf87-a58f-4f86-8449-a286faba342e" 00:09:00.612 ], 00:09:00.612 "product_name": "Raid Volume", 00:09:00.612 "block_size": 512, 00:09:00.612 "num_blocks": 196608, 00:09:00.612 "uuid": "0eccaf87-a58f-4f86-8449-a286faba342e", 00:09:00.612 "assigned_rate_limits": { 00:09:00.612 "rw_ios_per_sec": 0, 00:09:00.612 "rw_mbytes_per_sec": 0, 00:09:00.612 "r_mbytes_per_sec": 0, 00:09:00.612 "w_mbytes_per_sec": 0 00:09:00.612 }, 00:09:00.612 "claimed": false, 00:09:00.612 "zoned": false, 00:09:00.612 "supported_io_types": { 00:09:00.612 "read": true, 00:09:00.612 "write": true, 00:09:00.612 "unmap": true, 00:09:00.612 "flush": true, 00:09:00.612 "reset": true, 00:09:00.612 "nvme_admin": false, 00:09:00.612 "nvme_io": false, 00:09:00.612 "nvme_io_md": false, 00:09:00.612 "write_zeroes": true, 00:09:00.612 "zcopy": false, 00:09:00.612 "get_zone_info": false, 00:09:00.612 "zone_management": false, 00:09:00.612 "zone_append": false, 00:09:00.612 "compare": false, 00:09:00.612 "compare_and_write": false, 00:09:00.612 "abort": false, 00:09:00.612 "seek_hole": false, 00:09:00.612 "seek_data": false, 00:09:00.612 "copy": false, 00:09:00.612 "nvme_iov_md": false 00:09:00.612 }, 00:09:00.612 "memory_domains": [ 00:09:00.612 { 00:09:00.612 "dma_device_id": "system", 00:09:00.612 "dma_device_type": 1 
00:09:00.612 }, 00:09:00.612 { 00:09:00.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.612 "dma_device_type": 2 00:09:00.612 }, 00:09:00.612 { 00:09:00.612 "dma_device_id": "system", 00:09:00.612 "dma_device_type": 1 00:09:00.612 }, 00:09:00.612 { 00:09:00.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.612 "dma_device_type": 2 00:09:00.612 }, 00:09:00.612 { 00:09:00.612 "dma_device_id": "system", 00:09:00.612 "dma_device_type": 1 00:09:00.612 }, 00:09:00.612 { 00:09:00.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.612 "dma_device_type": 2 00:09:00.612 } 00:09:00.612 ], 00:09:00.612 "driver_specific": { 00:09:00.612 "raid": { 00:09:00.612 "uuid": "0eccaf87-a58f-4f86-8449-a286faba342e", 00:09:00.612 "strip_size_kb": 64, 00:09:00.612 "state": "online", 00:09:00.612 "raid_level": "concat", 00:09:00.612 "superblock": false, 00:09:00.612 "num_base_bdevs": 3, 00:09:00.612 "num_base_bdevs_discovered": 3, 00:09:00.612 "num_base_bdevs_operational": 3, 00:09:00.612 "base_bdevs_list": [ 00:09:00.612 { 00:09:00.612 "name": "BaseBdev1", 00:09:00.612 "uuid": "860098ca-09fe-4f08-8a70-bfcba1160e95", 00:09:00.613 "is_configured": true, 00:09:00.613 "data_offset": 0, 00:09:00.613 "data_size": 65536 00:09:00.613 }, 00:09:00.613 { 00:09:00.613 "name": "BaseBdev2", 00:09:00.613 "uuid": "27aab962-41a9-4a41-b78c-5adc26d3086a", 00:09:00.613 "is_configured": true, 00:09:00.613 "data_offset": 0, 00:09:00.613 "data_size": 65536 00:09:00.613 }, 00:09:00.613 { 00:09:00.613 "name": "BaseBdev3", 00:09:00.613 "uuid": "483438bc-3d00-4a56-a200-7adfe5a00151", 00:09:00.613 "is_configured": true, 00:09:00.613 "data_offset": 0, 00:09:00.613 "data_size": 65536 00:09:00.613 } 00:09:00.613 ] 00:09:00.613 } 00:09:00.613 } 00:09:00.613 }' 00:09:00.613 15:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:00.613 15:16:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:00.613 BaseBdev2 00:09:00.613 BaseBdev3' 00:09:00.613 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.613 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:00.613 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.613 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:00.613 15:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.613 15:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.613 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.613 15:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.872 [2024-11-20 15:16:47.203302] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:00.872 [2024-11-20 15:16:47.203462] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:00.872 [2024-11-20 15:16:47.203544] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 
-- # set +x 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.872 15:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.130 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.130 "name": "Existed_Raid", 00:09:01.130 "uuid": "0eccaf87-a58f-4f86-8449-a286faba342e", 00:09:01.130 "strip_size_kb": 64, 00:09:01.130 "state": "offline", 00:09:01.130 "raid_level": "concat", 00:09:01.130 "superblock": false, 00:09:01.130 "num_base_bdevs": 3, 00:09:01.130 "num_base_bdevs_discovered": 2, 00:09:01.130 "num_base_bdevs_operational": 2, 00:09:01.130 "base_bdevs_list": [ 00:09:01.130 { 00:09:01.130 "name": null, 00:09:01.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.130 "is_configured": false, 00:09:01.130 "data_offset": 0, 00:09:01.130 "data_size": 65536 00:09:01.130 }, 00:09:01.130 { 00:09:01.130 "name": "BaseBdev2", 00:09:01.130 "uuid": "27aab962-41a9-4a41-b78c-5adc26d3086a", 00:09:01.130 "is_configured": true, 00:09:01.130 "data_offset": 0, 00:09:01.130 "data_size": 65536 00:09:01.130 }, 00:09:01.130 { 00:09:01.130 "name": "BaseBdev3", 00:09:01.130 "uuid": "483438bc-3d00-4a56-a200-7adfe5a00151", 00:09:01.130 "is_configured": true, 00:09:01.130 "data_offset": 0, 00:09:01.130 "data_size": 65536 00:09:01.130 } 00:09:01.130 ] 00:09:01.130 }' 00:09:01.130 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.130 15:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.388 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:01.388 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:01.389 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.389 
15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:01.389 15:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.389 15:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.389 15:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.389 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:01.389 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:01.389 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:01.389 15:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.389 15:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.389 [2024-11-20 15:16:47.724916] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:01.389 15:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.389 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:01.389 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:01.389 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.389 15:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.389 15:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.389 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:01.389 15:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.647 15:16:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:01.648 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:01.648 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:01.648 15:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.648 15:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.648 [2024-11-20 15:16:47.876749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:01.648 [2024-11-20 15:16:47.876800] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:01.648 15:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.648 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:01.648 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:01.648 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.648 15:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:01.648 15:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.648 15:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.648 15:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.648 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:01.648 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:01.648 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:01.648 
15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:01.648 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:01.648 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:01.648 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.648 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.648 BaseBdev2 00:09:01.648 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.648 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:01.648 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:01.648 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:01.648 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:01.648 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:01.648 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:01.648 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:01.648 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.648 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.648 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.648 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:01.648 15:16:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.648 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.648 [ 00:09:01.648 { 00:09:01.648 "name": "BaseBdev2", 00:09:01.648 "aliases": [ 00:09:01.648 "c22dfaaf-b353-448c-b3da-711239b1ee7e" 00:09:01.648 ], 00:09:01.648 "product_name": "Malloc disk", 00:09:01.648 "block_size": 512, 00:09:01.648 "num_blocks": 65536, 00:09:01.648 "uuid": "c22dfaaf-b353-448c-b3da-711239b1ee7e", 00:09:01.648 "assigned_rate_limits": { 00:09:01.648 "rw_ios_per_sec": 0, 00:09:01.648 "rw_mbytes_per_sec": 0, 00:09:01.648 "r_mbytes_per_sec": 0, 00:09:01.648 "w_mbytes_per_sec": 0 00:09:01.648 }, 00:09:01.648 "claimed": false, 00:09:01.648 "zoned": false, 00:09:01.648 "supported_io_types": { 00:09:01.648 "read": true, 00:09:01.648 "write": true, 00:09:01.648 "unmap": true, 00:09:01.648 "flush": true, 00:09:01.648 "reset": true, 00:09:01.648 "nvme_admin": false, 00:09:01.648 "nvme_io": false, 00:09:01.648 "nvme_io_md": false, 00:09:01.648 "write_zeroes": true, 00:09:01.648 "zcopy": true, 00:09:01.648 "get_zone_info": false, 00:09:01.648 "zone_management": false, 00:09:01.648 "zone_append": false, 00:09:01.648 "compare": false, 00:09:01.648 "compare_and_write": false, 00:09:01.648 "abort": true, 00:09:01.648 "seek_hole": false, 00:09:01.648 "seek_data": false, 00:09:01.648 "copy": true, 00:09:01.648 "nvme_iov_md": false 00:09:01.648 }, 00:09:01.648 "memory_domains": [ 00:09:01.648 { 00:09:01.648 "dma_device_id": "system", 00:09:01.648 "dma_device_type": 1 00:09:01.648 }, 00:09:01.648 { 00:09:01.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.648 "dma_device_type": 2 00:09:01.648 } 00:09:01.648 ], 00:09:01.648 "driver_specific": {} 00:09:01.648 } 00:09:01.648 ] 00:09:01.648 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.648 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:01.648 
15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:01.648 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:01.648 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:01.648 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.648 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.907 BaseBdev3 00:09:01.907 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.907 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:01.907 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:01.907 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:01.907 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:01.907 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:01.907 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:01.907 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:01.907 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.907 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.907 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.907 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:01.907 15:16:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.907 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.907 [ 00:09:01.907 { 00:09:01.907 "name": "BaseBdev3", 00:09:01.907 "aliases": [ 00:09:01.907 "80d26d62-b17a-469e-ab72-6f865eb07b1b" 00:09:01.907 ], 00:09:01.907 "product_name": "Malloc disk", 00:09:01.907 "block_size": 512, 00:09:01.907 "num_blocks": 65536, 00:09:01.907 "uuid": "80d26d62-b17a-469e-ab72-6f865eb07b1b", 00:09:01.907 "assigned_rate_limits": { 00:09:01.907 "rw_ios_per_sec": 0, 00:09:01.908 "rw_mbytes_per_sec": 0, 00:09:01.908 "r_mbytes_per_sec": 0, 00:09:01.908 "w_mbytes_per_sec": 0 00:09:01.908 }, 00:09:01.908 "claimed": false, 00:09:01.908 "zoned": false, 00:09:01.908 "supported_io_types": { 00:09:01.908 "read": true, 00:09:01.908 "write": true, 00:09:01.908 "unmap": true, 00:09:01.908 "flush": true, 00:09:01.908 "reset": true, 00:09:01.908 "nvme_admin": false, 00:09:01.908 "nvme_io": false, 00:09:01.908 "nvme_io_md": false, 00:09:01.908 "write_zeroes": true, 00:09:01.908 "zcopy": true, 00:09:01.908 "get_zone_info": false, 00:09:01.908 "zone_management": false, 00:09:01.908 "zone_append": false, 00:09:01.908 "compare": false, 00:09:01.908 "compare_and_write": false, 00:09:01.908 "abort": true, 00:09:01.908 "seek_hole": false, 00:09:01.908 "seek_data": false, 00:09:01.908 "copy": true, 00:09:01.908 "nvme_iov_md": false 00:09:01.908 }, 00:09:01.908 "memory_domains": [ 00:09:01.908 { 00:09:01.908 "dma_device_id": "system", 00:09:01.908 "dma_device_type": 1 00:09:01.908 }, 00:09:01.908 { 00:09:01.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.908 "dma_device_type": 2 00:09:01.908 } 00:09:01.908 ], 00:09:01.908 "driver_specific": {} 00:09:01.908 } 00:09:01.908 ] 00:09:01.908 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.908 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:01.908 
15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:01.908 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:01.908 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:01.908 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.908 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.908 [2024-11-20 15:16:48.202485] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:01.908 [2024-11-20 15:16:48.202634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:01.908 [2024-11-20 15:16:48.202744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:01.908 [2024-11-20 15:16:48.204793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:01.908 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.908 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:01.908 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.908 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.908 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.908 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.908 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.908 15:16:48 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.908 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.908 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.908 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.908 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.908 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.908 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.908 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.908 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.908 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.908 "name": "Existed_Raid", 00:09:01.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.908 "strip_size_kb": 64, 00:09:01.908 "state": "configuring", 00:09:01.908 "raid_level": "concat", 00:09:01.908 "superblock": false, 00:09:01.908 "num_base_bdevs": 3, 00:09:01.908 "num_base_bdevs_discovered": 2, 00:09:01.908 "num_base_bdevs_operational": 3, 00:09:01.908 "base_bdevs_list": [ 00:09:01.908 { 00:09:01.908 "name": "BaseBdev1", 00:09:01.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.908 "is_configured": false, 00:09:01.908 "data_offset": 0, 00:09:01.908 "data_size": 0 00:09:01.908 }, 00:09:01.908 { 00:09:01.908 "name": "BaseBdev2", 00:09:01.908 "uuid": "c22dfaaf-b353-448c-b3da-711239b1ee7e", 00:09:01.908 "is_configured": true, 00:09:01.908 "data_offset": 0, 00:09:01.908 "data_size": 65536 00:09:01.908 }, 00:09:01.908 { 00:09:01.908 "name": "BaseBdev3", 00:09:01.908 "uuid": 
"80d26d62-b17a-469e-ab72-6f865eb07b1b", 00:09:01.908 "is_configured": true, 00:09:01.908 "data_offset": 0, 00:09:01.908 "data_size": 65536 00:09:01.908 } 00:09:01.908 ] 00:09:01.908 }' 00:09:01.908 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.908 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.167 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:02.167 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.167 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.167 [2024-11-20 15:16:48.641885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:02.425 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.425 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:02.425 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.425 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.425 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:02.425 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.425 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.425 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.425 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.425 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:02.425 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.425 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.425 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.425 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.425 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.425 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.425 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.425 "name": "Existed_Raid", 00:09:02.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.425 "strip_size_kb": 64, 00:09:02.425 "state": "configuring", 00:09:02.425 "raid_level": "concat", 00:09:02.425 "superblock": false, 00:09:02.425 "num_base_bdevs": 3, 00:09:02.425 "num_base_bdevs_discovered": 1, 00:09:02.425 "num_base_bdevs_operational": 3, 00:09:02.425 "base_bdevs_list": [ 00:09:02.425 { 00:09:02.425 "name": "BaseBdev1", 00:09:02.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.425 "is_configured": false, 00:09:02.425 "data_offset": 0, 00:09:02.425 "data_size": 0 00:09:02.425 }, 00:09:02.425 { 00:09:02.425 "name": null, 00:09:02.425 "uuid": "c22dfaaf-b353-448c-b3da-711239b1ee7e", 00:09:02.425 "is_configured": false, 00:09:02.425 "data_offset": 0, 00:09:02.425 "data_size": 65536 00:09:02.425 }, 00:09:02.425 { 00:09:02.425 "name": "BaseBdev3", 00:09:02.425 "uuid": "80d26d62-b17a-469e-ab72-6f865eb07b1b", 00:09:02.425 "is_configured": true, 00:09:02.425 "data_offset": 0, 00:09:02.425 "data_size": 65536 00:09:02.425 } 00:09:02.425 ] 00:09:02.425 }' 00:09:02.425 15:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:02.425 15:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.684 [2024-11-20 15:16:49.107181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:02.684 BaseBdev1 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.684 [ 00:09:02.684 { 00:09:02.684 "name": "BaseBdev1", 00:09:02.684 "aliases": [ 00:09:02.684 "70964285-2e99-4870-b743-61a2619fd796" 00:09:02.684 ], 00:09:02.684 "product_name": "Malloc disk", 00:09:02.684 "block_size": 512, 00:09:02.684 "num_blocks": 65536, 00:09:02.684 "uuid": "70964285-2e99-4870-b743-61a2619fd796", 00:09:02.684 "assigned_rate_limits": { 00:09:02.684 "rw_ios_per_sec": 0, 00:09:02.684 "rw_mbytes_per_sec": 0, 00:09:02.684 "r_mbytes_per_sec": 0, 00:09:02.684 "w_mbytes_per_sec": 0 00:09:02.684 }, 00:09:02.684 "claimed": true, 00:09:02.684 "claim_type": "exclusive_write", 00:09:02.684 "zoned": false, 00:09:02.684 "supported_io_types": { 00:09:02.684 "read": true, 00:09:02.684 "write": true, 00:09:02.684 "unmap": true, 00:09:02.684 "flush": true, 00:09:02.684 "reset": true, 00:09:02.684 "nvme_admin": false, 00:09:02.684 "nvme_io": false, 00:09:02.684 "nvme_io_md": false, 00:09:02.684 "write_zeroes": true, 00:09:02.684 "zcopy": true, 00:09:02.684 "get_zone_info": false, 00:09:02.684 "zone_management": false, 00:09:02.684 "zone_append": false, 00:09:02.684 "compare": false, 00:09:02.684 "compare_and_write": false, 
00:09:02.684 "abort": true, 00:09:02.684 "seek_hole": false, 00:09:02.684 "seek_data": false, 00:09:02.684 "copy": true, 00:09:02.684 "nvme_iov_md": false 00:09:02.684 }, 00:09:02.684 "memory_domains": [ 00:09:02.684 { 00:09:02.684 "dma_device_id": "system", 00:09:02.684 "dma_device_type": 1 00:09:02.684 }, 00:09:02.684 { 00:09:02.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.684 "dma_device_type": 2 00:09:02.684 } 00:09:02.684 ], 00:09:02.684 "driver_specific": {} 00:09:02.684 } 00:09:02.684 ] 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.684 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.942 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.942 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.942 "name": "Existed_Raid", 00:09:02.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.942 "strip_size_kb": 64, 00:09:02.942 "state": "configuring", 00:09:02.942 "raid_level": "concat", 00:09:02.942 "superblock": false, 00:09:02.942 "num_base_bdevs": 3, 00:09:02.942 "num_base_bdevs_discovered": 2, 00:09:02.942 "num_base_bdevs_operational": 3, 00:09:02.942 "base_bdevs_list": [ 00:09:02.942 { 00:09:02.942 "name": "BaseBdev1", 00:09:02.942 "uuid": "70964285-2e99-4870-b743-61a2619fd796", 00:09:02.942 "is_configured": true, 00:09:02.942 "data_offset": 0, 00:09:02.942 "data_size": 65536 00:09:02.942 }, 00:09:02.942 { 00:09:02.942 "name": null, 00:09:02.942 "uuid": "c22dfaaf-b353-448c-b3da-711239b1ee7e", 00:09:02.942 "is_configured": false, 00:09:02.942 "data_offset": 0, 00:09:02.942 "data_size": 65536 00:09:02.942 }, 00:09:02.942 { 00:09:02.942 "name": "BaseBdev3", 00:09:02.942 "uuid": "80d26d62-b17a-469e-ab72-6f865eb07b1b", 00:09:02.942 "is_configured": true, 00:09:02.942 "data_offset": 0, 00:09:02.942 "data_size": 65536 00:09:02.942 } 00:09:02.942 ] 00:09:02.942 }' 00:09:02.942 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.942 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.200 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.200 
15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:03.200 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.200 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.200 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.200 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:03.200 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:03.200 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.200 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.200 [2024-11-20 15:16:49.579221] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:03.200 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.200 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:03.200 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.200 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.200 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.200 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.200 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.200 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.200 15:16:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.200 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.200 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.200 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.200 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.200 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.200 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.200 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.200 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.200 "name": "Existed_Raid", 00:09:03.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.200 "strip_size_kb": 64, 00:09:03.200 "state": "configuring", 00:09:03.200 "raid_level": "concat", 00:09:03.200 "superblock": false, 00:09:03.200 "num_base_bdevs": 3, 00:09:03.200 "num_base_bdevs_discovered": 1, 00:09:03.200 "num_base_bdevs_operational": 3, 00:09:03.200 "base_bdevs_list": [ 00:09:03.200 { 00:09:03.200 "name": "BaseBdev1", 00:09:03.200 "uuid": "70964285-2e99-4870-b743-61a2619fd796", 00:09:03.200 "is_configured": true, 00:09:03.200 "data_offset": 0, 00:09:03.200 "data_size": 65536 00:09:03.200 }, 00:09:03.200 { 00:09:03.200 "name": null, 00:09:03.200 "uuid": "c22dfaaf-b353-448c-b3da-711239b1ee7e", 00:09:03.200 "is_configured": false, 00:09:03.200 "data_offset": 0, 00:09:03.200 "data_size": 65536 00:09:03.200 }, 00:09:03.200 { 00:09:03.200 "name": null, 00:09:03.200 "uuid": "80d26d62-b17a-469e-ab72-6f865eb07b1b", 00:09:03.200 "is_configured": false, 00:09:03.200 "data_offset": 0, 00:09:03.200 "data_size": 65536 00:09:03.200 
} 00:09:03.200 ] 00:09:03.200 }' 00:09:03.200 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.200 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.768 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.768 15:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:03.768 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.768 15:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.768 15:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.768 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:03.768 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:03.768 15:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.768 15:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.768 [2024-11-20 15:16:50.046818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:03.768 15:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.768 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:03.768 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.768 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.768 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:09:03.768 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.768 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.768 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.768 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.768 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.768 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.768 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.768 15:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.768 15:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.768 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.768 15:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.768 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.768 "name": "Existed_Raid", 00:09:03.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.768 "strip_size_kb": 64, 00:09:03.768 "state": "configuring", 00:09:03.768 "raid_level": "concat", 00:09:03.768 "superblock": false, 00:09:03.768 "num_base_bdevs": 3, 00:09:03.768 "num_base_bdevs_discovered": 2, 00:09:03.768 "num_base_bdevs_operational": 3, 00:09:03.768 "base_bdevs_list": [ 00:09:03.768 { 00:09:03.768 "name": "BaseBdev1", 00:09:03.768 "uuid": "70964285-2e99-4870-b743-61a2619fd796", 00:09:03.768 "is_configured": true, 00:09:03.768 "data_offset": 0, 00:09:03.768 "data_size": 65536 00:09:03.768 }, 00:09:03.768 { 
00:09:03.768 "name": null, 00:09:03.768 "uuid": "c22dfaaf-b353-448c-b3da-711239b1ee7e", 00:09:03.768 "is_configured": false, 00:09:03.768 "data_offset": 0, 00:09:03.768 "data_size": 65536 00:09:03.768 }, 00:09:03.768 { 00:09:03.768 "name": "BaseBdev3", 00:09:03.768 "uuid": "80d26d62-b17a-469e-ab72-6f865eb07b1b", 00:09:03.768 "is_configured": true, 00:09:03.768 "data_offset": 0, 00:09:03.768 "data_size": 65536 00:09:03.768 } 00:09:03.768 ] 00:09:03.768 }' 00:09:03.768 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.768 15:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.028 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:04.028 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.028 15:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.028 15:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.028 15:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.028 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:04.028 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:04.028 15:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.028 15:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.028 [2024-11-20 15:16:50.506183] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:04.287 15:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.287 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state 
Existed_Raid configuring concat 64 3 00:09:04.287 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.287 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.287 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.287 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.287 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.287 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.287 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.287 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.287 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.287 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.287 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.287 15:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.287 15:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.287 15:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.287 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.287 "name": "Existed_Raid", 00:09:04.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.287 "strip_size_kb": 64, 00:09:04.287 "state": "configuring", 00:09:04.287 "raid_level": "concat", 00:09:04.287 "superblock": false, 00:09:04.287 "num_base_bdevs": 3, 
00:09:04.287 "num_base_bdevs_discovered": 1, 00:09:04.287 "num_base_bdevs_operational": 3, 00:09:04.287 "base_bdevs_list": [ 00:09:04.287 { 00:09:04.287 "name": null, 00:09:04.287 "uuid": "70964285-2e99-4870-b743-61a2619fd796", 00:09:04.287 "is_configured": false, 00:09:04.287 "data_offset": 0, 00:09:04.287 "data_size": 65536 00:09:04.287 }, 00:09:04.287 { 00:09:04.287 "name": null, 00:09:04.287 "uuid": "c22dfaaf-b353-448c-b3da-711239b1ee7e", 00:09:04.287 "is_configured": false, 00:09:04.287 "data_offset": 0, 00:09:04.287 "data_size": 65536 00:09:04.287 }, 00:09:04.287 { 00:09:04.287 "name": "BaseBdev3", 00:09:04.287 "uuid": "80d26d62-b17a-469e-ab72-6f865eb07b1b", 00:09:04.287 "is_configured": true, 00:09:04.287 "data_offset": 0, 00:09:04.287 "data_size": 65536 00:09:04.287 } 00:09:04.287 ] 00:09:04.287 }' 00:09:04.287 15:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.287 15:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.914 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.914 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.914 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.914 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:04.914 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.914 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:04.914 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:04.914 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.914 15:16:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.914 [2024-11-20 15:16:51.085801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:04.914 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.914 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:04.914 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.914 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.914 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.914 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.914 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.914 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.914 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.914 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.914 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.914 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.914 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.914 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.914 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.914 15:16:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.914 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.914 "name": "Existed_Raid", 00:09:04.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.914 "strip_size_kb": 64, 00:09:04.914 "state": "configuring", 00:09:04.914 "raid_level": "concat", 00:09:04.914 "superblock": false, 00:09:04.914 "num_base_bdevs": 3, 00:09:04.914 "num_base_bdevs_discovered": 2, 00:09:04.914 "num_base_bdevs_operational": 3, 00:09:04.914 "base_bdevs_list": [ 00:09:04.914 { 00:09:04.914 "name": null, 00:09:04.914 "uuid": "70964285-2e99-4870-b743-61a2619fd796", 00:09:04.914 "is_configured": false, 00:09:04.914 "data_offset": 0, 00:09:04.914 "data_size": 65536 00:09:04.914 }, 00:09:04.914 { 00:09:04.914 "name": "BaseBdev2", 00:09:04.914 "uuid": "c22dfaaf-b353-448c-b3da-711239b1ee7e", 00:09:04.914 "is_configured": true, 00:09:04.914 "data_offset": 0, 00:09:04.914 "data_size": 65536 00:09:04.914 }, 00:09:04.914 { 00:09:04.914 "name": "BaseBdev3", 00:09:04.914 "uuid": "80d26d62-b17a-469e-ab72-6f865eb07b1b", 00:09:04.914 "is_configured": true, 00:09:04.914 "data_offset": 0, 00:09:04.914 "data_size": 65536 00:09:04.914 } 00:09:04.914 ] 00:09:04.914 }' 00:09:04.914 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.914 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.188 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:05.188 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.188 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.188 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.188 15:16:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.188 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:05.188 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.189 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:05.189 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.189 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.189 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.189 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 70964285-2e99-4870-b743-61a2619fd796 00:09:05.189 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.189 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.189 [2024-11-20 15:16:51.612431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:05.189 [2024-11-20 15:16:51.612686] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:05.189 [2024-11-20 15:16:51.612712] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:05.189 [2024-11-20 15:16:51.612984] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:05.189 [2024-11-20 15:16:51.613138] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:05.189 [2024-11-20 15:16:51.613148] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:05.189 [2024-11-20 15:16:51.613396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:09:05.189 NewBaseBdev 00:09:05.189 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.189 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:05.189 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:05.189 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:05.189 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:05.189 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:05.189 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:05.189 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:05.189 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.189 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.189 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.189 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:05.189 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.189 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.189 [ 00:09:05.189 { 00:09:05.189 "name": "NewBaseBdev", 00:09:05.189 "aliases": [ 00:09:05.189 "70964285-2e99-4870-b743-61a2619fd796" 00:09:05.189 ], 00:09:05.189 "product_name": "Malloc disk", 00:09:05.189 "block_size": 512, 00:09:05.189 "num_blocks": 65536, 00:09:05.189 "uuid": "70964285-2e99-4870-b743-61a2619fd796", 00:09:05.189 "assigned_rate_limits": { 
00:09:05.189 "rw_ios_per_sec": 0, 00:09:05.189 "rw_mbytes_per_sec": 0, 00:09:05.189 "r_mbytes_per_sec": 0, 00:09:05.189 "w_mbytes_per_sec": 0 00:09:05.189 }, 00:09:05.189 "claimed": true, 00:09:05.189 "claim_type": "exclusive_write", 00:09:05.189 "zoned": false, 00:09:05.189 "supported_io_types": { 00:09:05.189 "read": true, 00:09:05.189 "write": true, 00:09:05.189 "unmap": true, 00:09:05.189 "flush": true, 00:09:05.189 "reset": true, 00:09:05.189 "nvme_admin": false, 00:09:05.189 "nvme_io": false, 00:09:05.189 "nvme_io_md": false, 00:09:05.189 "write_zeroes": true, 00:09:05.189 "zcopy": true, 00:09:05.189 "get_zone_info": false, 00:09:05.189 "zone_management": false, 00:09:05.189 "zone_append": false, 00:09:05.189 "compare": false, 00:09:05.189 "compare_and_write": false, 00:09:05.189 "abort": true, 00:09:05.189 "seek_hole": false, 00:09:05.189 "seek_data": false, 00:09:05.189 "copy": true, 00:09:05.189 "nvme_iov_md": false 00:09:05.448 }, 00:09:05.448 "memory_domains": [ 00:09:05.448 { 00:09:05.448 "dma_device_id": "system", 00:09:05.448 "dma_device_type": 1 00:09:05.448 }, 00:09:05.448 { 00:09:05.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.448 "dma_device_type": 2 00:09:05.449 } 00:09:05.449 ], 00:09:05.449 "driver_specific": {} 00:09:05.449 } 00:09:05.449 ] 00:09:05.449 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.449 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:05.449 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:05.449 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.449 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:05.449 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 
00:09:05.449 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.449 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.449 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.449 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.449 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.449 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.449 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.449 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.449 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.449 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.449 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.449 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.449 "name": "Existed_Raid", 00:09:05.449 "uuid": "b542416d-c37d-4661-ad1b-3285e3ae99a1", 00:09:05.449 "strip_size_kb": 64, 00:09:05.449 "state": "online", 00:09:05.449 "raid_level": "concat", 00:09:05.449 "superblock": false, 00:09:05.449 "num_base_bdevs": 3, 00:09:05.449 "num_base_bdevs_discovered": 3, 00:09:05.449 "num_base_bdevs_operational": 3, 00:09:05.449 "base_bdevs_list": [ 00:09:05.449 { 00:09:05.449 "name": "NewBaseBdev", 00:09:05.449 "uuid": "70964285-2e99-4870-b743-61a2619fd796", 00:09:05.449 "is_configured": true, 00:09:05.449 "data_offset": 0, 00:09:05.449 "data_size": 65536 00:09:05.449 }, 00:09:05.449 { 00:09:05.449 "name": 
"BaseBdev2", 00:09:05.449 "uuid": "c22dfaaf-b353-448c-b3da-711239b1ee7e", 00:09:05.449 "is_configured": true, 00:09:05.449 "data_offset": 0, 00:09:05.449 "data_size": 65536 00:09:05.449 }, 00:09:05.449 { 00:09:05.449 "name": "BaseBdev3", 00:09:05.449 "uuid": "80d26d62-b17a-469e-ab72-6f865eb07b1b", 00:09:05.449 "is_configured": true, 00:09:05.449 "data_offset": 0, 00:09:05.449 "data_size": 65536 00:09:05.449 } 00:09:05.449 ] 00:09:05.449 }' 00:09:05.449 15:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.449 15:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.709 15:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:05.709 15:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:05.709 15:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:05.709 15:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:05.709 15:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:05.709 15:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:05.709 15:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:05.709 15:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:05.709 15:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.709 15:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.709 [2024-11-20 15:16:52.104079] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:05.709 15:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:05.709 15:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:05.709 "name": "Existed_Raid", 00:09:05.709 "aliases": [ 00:09:05.709 "b542416d-c37d-4661-ad1b-3285e3ae99a1" 00:09:05.709 ], 00:09:05.709 "product_name": "Raid Volume", 00:09:05.709 "block_size": 512, 00:09:05.709 "num_blocks": 196608, 00:09:05.709 "uuid": "b542416d-c37d-4661-ad1b-3285e3ae99a1", 00:09:05.709 "assigned_rate_limits": { 00:09:05.709 "rw_ios_per_sec": 0, 00:09:05.709 "rw_mbytes_per_sec": 0, 00:09:05.709 "r_mbytes_per_sec": 0, 00:09:05.709 "w_mbytes_per_sec": 0 00:09:05.709 }, 00:09:05.709 "claimed": false, 00:09:05.709 "zoned": false, 00:09:05.709 "supported_io_types": { 00:09:05.709 "read": true, 00:09:05.709 "write": true, 00:09:05.709 "unmap": true, 00:09:05.709 "flush": true, 00:09:05.709 "reset": true, 00:09:05.709 "nvme_admin": false, 00:09:05.709 "nvme_io": false, 00:09:05.709 "nvme_io_md": false, 00:09:05.709 "write_zeroes": true, 00:09:05.709 "zcopy": false, 00:09:05.709 "get_zone_info": false, 00:09:05.709 "zone_management": false, 00:09:05.709 "zone_append": false, 00:09:05.709 "compare": false, 00:09:05.709 "compare_and_write": false, 00:09:05.709 "abort": false, 00:09:05.709 "seek_hole": false, 00:09:05.709 "seek_data": false, 00:09:05.709 "copy": false, 00:09:05.709 "nvme_iov_md": false 00:09:05.709 }, 00:09:05.709 "memory_domains": [ 00:09:05.709 { 00:09:05.709 "dma_device_id": "system", 00:09:05.709 "dma_device_type": 1 00:09:05.709 }, 00:09:05.709 { 00:09:05.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.709 "dma_device_type": 2 00:09:05.709 }, 00:09:05.709 { 00:09:05.709 "dma_device_id": "system", 00:09:05.709 "dma_device_type": 1 00:09:05.709 }, 00:09:05.709 { 00:09:05.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.709 "dma_device_type": 2 00:09:05.709 }, 00:09:05.709 { 00:09:05.709 "dma_device_id": "system", 00:09:05.709 "dma_device_type": 1 00:09:05.709 }, 00:09:05.709 { 00:09:05.709 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:05.709 "dma_device_type": 2 00:09:05.709 } 00:09:05.709 ], 00:09:05.709 "driver_specific": { 00:09:05.709 "raid": { 00:09:05.709 "uuid": "b542416d-c37d-4661-ad1b-3285e3ae99a1", 00:09:05.709 "strip_size_kb": 64, 00:09:05.709 "state": "online", 00:09:05.709 "raid_level": "concat", 00:09:05.709 "superblock": false, 00:09:05.709 "num_base_bdevs": 3, 00:09:05.709 "num_base_bdevs_discovered": 3, 00:09:05.709 "num_base_bdevs_operational": 3, 00:09:05.709 "base_bdevs_list": [ 00:09:05.709 { 00:09:05.709 "name": "NewBaseBdev", 00:09:05.709 "uuid": "70964285-2e99-4870-b743-61a2619fd796", 00:09:05.709 "is_configured": true, 00:09:05.709 "data_offset": 0, 00:09:05.709 "data_size": 65536 00:09:05.709 }, 00:09:05.709 { 00:09:05.709 "name": "BaseBdev2", 00:09:05.709 "uuid": "c22dfaaf-b353-448c-b3da-711239b1ee7e", 00:09:05.709 "is_configured": true, 00:09:05.709 "data_offset": 0, 00:09:05.709 "data_size": 65536 00:09:05.709 }, 00:09:05.709 { 00:09:05.709 "name": "BaseBdev3", 00:09:05.709 "uuid": "80d26d62-b17a-469e-ab72-6f865eb07b1b", 00:09:05.709 "is_configured": true, 00:09:05.709 "data_offset": 0, 00:09:05.709 "data_size": 65536 00:09:05.709 } 00:09:05.709 ] 00:09:05.709 } 00:09:05.709 } 00:09:05.709 }' 00:09:05.709 15:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:05.709 15:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:05.709 BaseBdev2 00:09:05.709 BaseBdev3' 00:09:05.969 15:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.969 15:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:05.969 15:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.969 15:16:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:05.969 15:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.969 15:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.969 15:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.969 15:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.969 15:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.969 15:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.969 15:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.969 15:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:05.969 15:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.969 15:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.969 15:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.969 15:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.969 15:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.969 15:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.969 15:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.969 15:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:05.969 
15:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.969 15:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.969 15:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.969 15:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.969 15:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.969 15:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.969 15:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:05.969 15:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.969 15:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.969 [2024-11-20 15:16:52.379693] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:05.969 [2024-11-20 15:16:52.379721] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:05.969 [2024-11-20 15:16:52.379803] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:05.969 [2024-11-20 15:16:52.379856] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:05.969 [2024-11-20 15:16:52.379878] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:05.969 15:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.969 15:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65474 00:09:05.969 15:16:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 65474 ']' 00:09:05.969 15:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65474 00:09:05.969 15:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:05.970 15:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.970 15:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65474 00:09:05.970 killing process with pid 65474 00:09:05.970 15:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.970 15:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.970 15:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65474' 00:09:05.970 15:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65474 00:09:05.970 [2024-11-20 15:16:52.427348] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:05.970 15:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65474 00:09:06.538 [2024-11-20 15:16:52.731642] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:07.475 15:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:07.475 00:09:07.475 real 0m10.304s 00:09:07.475 user 0m16.272s 00:09:07.475 sys 0m2.088s 00:09:07.475 15:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.475 15:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.475 ************************************ 00:09:07.475 END TEST raid_state_function_test 00:09:07.475 ************************************ 00:09:07.475 15:16:53 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test concat 3 true 00:09:07.475 15:16:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:07.475 15:16:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.475 15:16:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:07.475 ************************************ 00:09:07.475 START TEST raid_state_function_test_sb 00:09:07.475 ************************************ 00:09:07.475 15:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:09:07.475 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:07.475 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:07.475 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:07.475 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:07.475 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:07.475 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.475 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:07.475 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:07.475 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.475 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:07.475 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:07.476 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.476 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 
00:09:07.476 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:07.476 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.733 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:07.733 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:07.733 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:07.733 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:07.733 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:07.733 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:07.733 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:07.733 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:07.734 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:07.734 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:07.734 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:07.734 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66094 00:09:07.734 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:07.734 Process raid pid: 66094 00:09:07.734 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66094' 00:09:07.734 15:16:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 66094 00:09:07.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.734 15:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66094 ']' 00:09:07.734 15:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.734 15:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.734 15:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.734 15:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.734 15:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.734 [2024-11-20 15:16:54.051706] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:09:07.734 [2024-11-20 15:16:54.052016] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.993 [2024-11-20 15:16:54.233145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.993 [2024-11-20 15:16:54.351597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.252 [2024-11-20 15:16:54.569079] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.252 [2024-11-20 15:16:54.569305] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.511 15:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.511 15:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:08.511 15:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:08.511 15:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.511 15:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.511 [2024-11-20 15:16:54.954306] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:08.511 [2024-11-20 15:16:54.954366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:08.511 [2024-11-20 15:16:54.954378] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:08.511 [2024-11-20 15:16:54.954391] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:08.511 [2024-11-20 15:16:54.954399] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:09:08.512 [2024-11-20 15:16:54.954411] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:08.512 15:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.512 15:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:08.512 15:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.512 15:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.512 15:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:08.512 15:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.512 15:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.512 15:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.512 15:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.512 15:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.512 15:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.512 15:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.512 15:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.512 15:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.512 15:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.512 15:16:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.771 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.771 "name": "Existed_Raid", 00:09:08.771 "uuid": "99a65b60-dea9-427e-a45f-eda15f86ec12", 00:09:08.771 "strip_size_kb": 64, 00:09:08.771 "state": "configuring", 00:09:08.771 "raid_level": "concat", 00:09:08.771 "superblock": true, 00:09:08.771 "num_base_bdevs": 3, 00:09:08.771 "num_base_bdevs_discovered": 0, 00:09:08.771 "num_base_bdevs_operational": 3, 00:09:08.771 "base_bdevs_list": [ 00:09:08.771 { 00:09:08.771 "name": "BaseBdev1", 00:09:08.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.771 "is_configured": false, 00:09:08.771 "data_offset": 0, 00:09:08.771 "data_size": 0 00:09:08.771 }, 00:09:08.771 { 00:09:08.771 "name": "BaseBdev2", 00:09:08.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.771 "is_configured": false, 00:09:08.771 "data_offset": 0, 00:09:08.771 "data_size": 0 00:09:08.771 }, 00:09:08.771 { 00:09:08.771 "name": "BaseBdev3", 00:09:08.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.771 "is_configured": false, 00:09:08.771 "data_offset": 0, 00:09:08.771 "data_size": 0 00:09:08.771 } 00:09:08.771 ] 00:09:08.771 }' 00:09:08.771 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.771 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.031 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:09.031 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.031 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.031 [2024-11-20 15:16:55.349719] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:09.031 [2024-11-20 15:16:55.349757] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:09.031 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.031 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:09.031 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.031 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.031 [2024-11-20 15:16:55.361714] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:09.031 [2024-11-20 15:16:55.361763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:09.031 [2024-11-20 15:16:55.361786] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:09.031 [2024-11-20 15:16:55.361799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:09.031 [2024-11-20 15:16:55.361806] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:09.031 [2024-11-20 15:16:55.361818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:09.031 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.031 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:09.031 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.031 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.031 [2024-11-20 15:16:55.413971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.031 BaseBdev1 
00:09:09.031 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.031 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:09.031 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:09.031 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:09.031 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:09.031 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:09.031 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:09.031 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:09.031 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.031 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.031 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.031 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:09.031 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.031 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.031 [ 00:09:09.031 { 00:09:09.031 "name": "BaseBdev1", 00:09:09.031 "aliases": [ 00:09:09.031 "52af21d9-92e7-4a54-9b6c-ed01d36639a6" 00:09:09.031 ], 00:09:09.031 "product_name": "Malloc disk", 00:09:09.031 "block_size": 512, 00:09:09.031 "num_blocks": 65536, 00:09:09.031 "uuid": "52af21d9-92e7-4a54-9b6c-ed01d36639a6", 00:09:09.031 "assigned_rate_limits": { 00:09:09.031 
"rw_ios_per_sec": 0, 00:09:09.031 "rw_mbytes_per_sec": 0, 00:09:09.031 "r_mbytes_per_sec": 0, 00:09:09.031 "w_mbytes_per_sec": 0 00:09:09.031 }, 00:09:09.031 "claimed": true, 00:09:09.031 "claim_type": "exclusive_write", 00:09:09.031 "zoned": false, 00:09:09.031 "supported_io_types": { 00:09:09.031 "read": true, 00:09:09.031 "write": true, 00:09:09.031 "unmap": true, 00:09:09.031 "flush": true, 00:09:09.031 "reset": true, 00:09:09.031 "nvme_admin": false, 00:09:09.031 "nvme_io": false, 00:09:09.031 "nvme_io_md": false, 00:09:09.031 "write_zeroes": true, 00:09:09.031 "zcopy": true, 00:09:09.031 "get_zone_info": false, 00:09:09.031 "zone_management": false, 00:09:09.031 "zone_append": false, 00:09:09.031 "compare": false, 00:09:09.031 "compare_and_write": false, 00:09:09.031 "abort": true, 00:09:09.031 "seek_hole": false, 00:09:09.031 "seek_data": false, 00:09:09.031 "copy": true, 00:09:09.031 "nvme_iov_md": false 00:09:09.031 }, 00:09:09.031 "memory_domains": [ 00:09:09.031 { 00:09:09.031 "dma_device_id": "system", 00:09:09.031 "dma_device_type": 1 00:09:09.031 }, 00:09:09.031 { 00:09:09.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.031 "dma_device_type": 2 00:09:09.031 } 00:09:09.031 ], 00:09:09.031 "driver_specific": {} 00:09:09.031 } 00:09:09.031 ] 00:09:09.031 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.031 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:09.031 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:09.031 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.031 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.031 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:09:09.031 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.032 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.032 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.032 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.032 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.032 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.032 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.032 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.032 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.032 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.032 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.032 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.032 "name": "Existed_Raid", 00:09:09.032 "uuid": "9d7c54bd-e5fd-42cb-90b9-11f46ab58f8a", 00:09:09.032 "strip_size_kb": 64, 00:09:09.032 "state": "configuring", 00:09:09.032 "raid_level": "concat", 00:09:09.032 "superblock": true, 00:09:09.032 "num_base_bdevs": 3, 00:09:09.032 "num_base_bdevs_discovered": 1, 00:09:09.032 "num_base_bdevs_operational": 3, 00:09:09.032 "base_bdevs_list": [ 00:09:09.032 { 00:09:09.032 "name": "BaseBdev1", 00:09:09.032 "uuid": "52af21d9-92e7-4a54-9b6c-ed01d36639a6", 00:09:09.032 "is_configured": true, 00:09:09.032 "data_offset": 2048, 00:09:09.032 "data_size": 
63488 00:09:09.032 }, 00:09:09.032 { 00:09:09.032 "name": "BaseBdev2", 00:09:09.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.032 "is_configured": false, 00:09:09.032 "data_offset": 0, 00:09:09.032 "data_size": 0 00:09:09.032 }, 00:09:09.032 { 00:09:09.032 "name": "BaseBdev3", 00:09:09.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.032 "is_configured": false, 00:09:09.032 "data_offset": 0, 00:09:09.032 "data_size": 0 00:09:09.032 } 00:09:09.032 ] 00:09:09.032 }' 00:09:09.032 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.032 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.615 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:09.615 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.615 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.615 [2024-11-20 15:16:55.853541] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:09.615 [2024-11-20 15:16:55.853592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:09.615 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.615 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:09.615 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.615 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.615 [2024-11-20 15:16:55.865576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.615 [2024-11-20 
15:16:55.867704] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:09.615 [2024-11-20 15:16:55.867753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:09.615 [2024-11-20 15:16:55.867764] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:09.615 [2024-11-20 15:16:55.867776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:09.615 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.615 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:09.615 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:09.615 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:09.615 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.615 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.615 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:09.615 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.615 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.615 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.615 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.615 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.615 15:16:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.615 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.615 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.615 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.615 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.615 15:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.615 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.615 "name": "Existed_Raid", 00:09:09.615 "uuid": "827785c5-938e-48bf-bee5-a7ecff49ec15", 00:09:09.615 "strip_size_kb": 64, 00:09:09.615 "state": "configuring", 00:09:09.615 "raid_level": "concat", 00:09:09.615 "superblock": true, 00:09:09.615 "num_base_bdevs": 3, 00:09:09.615 "num_base_bdevs_discovered": 1, 00:09:09.615 "num_base_bdevs_operational": 3, 00:09:09.615 "base_bdevs_list": [ 00:09:09.615 { 00:09:09.615 "name": "BaseBdev1", 00:09:09.615 "uuid": "52af21d9-92e7-4a54-9b6c-ed01d36639a6", 00:09:09.615 "is_configured": true, 00:09:09.615 "data_offset": 2048, 00:09:09.615 "data_size": 63488 00:09:09.615 }, 00:09:09.615 { 00:09:09.615 "name": "BaseBdev2", 00:09:09.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.615 "is_configured": false, 00:09:09.615 "data_offset": 0, 00:09:09.615 "data_size": 0 00:09:09.615 }, 00:09:09.615 { 00:09:09.615 "name": "BaseBdev3", 00:09:09.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.615 "is_configured": false, 00:09:09.615 "data_offset": 0, 00:09:09.615 "data_size": 0 00:09:09.615 } 00:09:09.615 ] 00:09:09.615 }' 00:09:09.615 15:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.615 15:16:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:09.892 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:09.892 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.892 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.892 [2024-11-20 15:16:56.327355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:09.892 BaseBdev2 00:09:09.892 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.892 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:09.892 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:09.892 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:09.892 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:09.892 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:09.892 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:09.892 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:09.892 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.892 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.892 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.892 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:09.892 15:16:56 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.892 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.892 [ 00:09:09.892 { 00:09:09.892 "name": "BaseBdev2", 00:09:09.892 "aliases": [ 00:09:09.892 "22f22c4f-2a07-4c7c-8459-2fdd3afee478" 00:09:09.892 ], 00:09:09.892 "product_name": "Malloc disk", 00:09:09.892 "block_size": 512, 00:09:09.892 "num_blocks": 65536, 00:09:09.892 "uuid": "22f22c4f-2a07-4c7c-8459-2fdd3afee478", 00:09:09.892 "assigned_rate_limits": { 00:09:09.892 "rw_ios_per_sec": 0, 00:09:09.892 "rw_mbytes_per_sec": 0, 00:09:09.892 "r_mbytes_per_sec": 0, 00:09:09.892 "w_mbytes_per_sec": 0 00:09:09.892 }, 00:09:09.892 "claimed": true, 00:09:09.892 "claim_type": "exclusive_write", 00:09:09.892 "zoned": false, 00:09:09.892 "supported_io_types": { 00:09:09.892 "read": true, 00:09:09.892 "write": true, 00:09:09.892 "unmap": true, 00:09:09.892 "flush": true, 00:09:09.892 "reset": true, 00:09:09.892 "nvme_admin": false, 00:09:09.892 "nvme_io": false, 00:09:09.892 "nvme_io_md": false, 00:09:09.892 "write_zeroes": true, 00:09:09.892 "zcopy": true, 00:09:09.892 "get_zone_info": false, 00:09:09.892 "zone_management": false, 00:09:09.892 "zone_append": false, 00:09:09.892 "compare": false, 00:09:09.892 "compare_and_write": false, 00:09:09.892 "abort": true, 00:09:09.892 "seek_hole": false, 00:09:09.892 "seek_data": false, 00:09:09.892 "copy": true, 00:09:09.892 "nvme_iov_md": false 00:09:09.892 }, 00:09:09.892 "memory_domains": [ 00:09:09.892 { 00:09:09.892 "dma_device_id": "system", 00:09:09.892 "dma_device_type": 1 00:09:09.892 }, 00:09:09.892 { 00:09:09.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.892 "dma_device_type": 2 00:09:09.892 } 00:09:09.892 ], 00:09:09.892 "driver_specific": {} 00:09:09.892 } 00:09:09.892 ] 00:09:10.151 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.151 15:16:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:09:10.151 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:10.151 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:10.151 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:10.151 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.151 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.151 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.151 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.151 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.151 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.151 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.151 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.151 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.151 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.151 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.151 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.151 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.151 15:16:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.151 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.151 "name": "Existed_Raid", 00:09:10.151 "uuid": "827785c5-938e-48bf-bee5-a7ecff49ec15", 00:09:10.151 "strip_size_kb": 64, 00:09:10.151 "state": "configuring", 00:09:10.151 "raid_level": "concat", 00:09:10.151 "superblock": true, 00:09:10.151 "num_base_bdevs": 3, 00:09:10.151 "num_base_bdevs_discovered": 2, 00:09:10.151 "num_base_bdevs_operational": 3, 00:09:10.151 "base_bdevs_list": [ 00:09:10.151 { 00:09:10.151 "name": "BaseBdev1", 00:09:10.151 "uuid": "52af21d9-92e7-4a54-9b6c-ed01d36639a6", 00:09:10.151 "is_configured": true, 00:09:10.151 "data_offset": 2048, 00:09:10.151 "data_size": 63488 00:09:10.151 }, 00:09:10.151 { 00:09:10.151 "name": "BaseBdev2", 00:09:10.151 "uuid": "22f22c4f-2a07-4c7c-8459-2fdd3afee478", 00:09:10.151 "is_configured": true, 00:09:10.151 "data_offset": 2048, 00:09:10.151 "data_size": 63488 00:09:10.151 }, 00:09:10.151 { 00:09:10.151 "name": "BaseBdev3", 00:09:10.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.151 "is_configured": false, 00:09:10.151 "data_offset": 0, 00:09:10.151 "data_size": 0 00:09:10.151 } 00:09:10.151 ] 00:09:10.151 }' 00:09:10.151 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.151 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.410 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:10.410 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.410 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.410 [2024-11-20 15:16:56.886746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:10.410 [2024-11-20 15:16:56.887002] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:10.410 [2024-11-20 15:16:56.887025] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:10.410 [2024-11-20 15:16:56.887315] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:10.411 [2024-11-20 15:16:56.887466] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:10.411 [2024-11-20 15:16:56.887476] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:10.411 BaseBdev3 00:09:10.411 [2024-11-20 15:16:56.887631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.411 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.411 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:10.411 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:10.411 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:10.411 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:10.411 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:10.411 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:10.411 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:10.411 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.411 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.670 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:09:10.670 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:10.670 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.670 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.670 [ 00:09:10.670 { 00:09:10.670 "name": "BaseBdev3", 00:09:10.670 "aliases": [ 00:09:10.670 "63b4b6f3-7adc-4813-a7c6-13824c25ed23" 00:09:10.670 ], 00:09:10.670 "product_name": "Malloc disk", 00:09:10.670 "block_size": 512, 00:09:10.670 "num_blocks": 65536, 00:09:10.670 "uuid": "63b4b6f3-7adc-4813-a7c6-13824c25ed23", 00:09:10.670 "assigned_rate_limits": { 00:09:10.670 "rw_ios_per_sec": 0, 00:09:10.670 "rw_mbytes_per_sec": 0, 00:09:10.670 "r_mbytes_per_sec": 0, 00:09:10.670 "w_mbytes_per_sec": 0 00:09:10.670 }, 00:09:10.670 "claimed": true, 00:09:10.670 "claim_type": "exclusive_write", 00:09:10.670 "zoned": false, 00:09:10.670 "supported_io_types": { 00:09:10.670 "read": true, 00:09:10.670 "write": true, 00:09:10.670 "unmap": true, 00:09:10.670 "flush": true, 00:09:10.670 "reset": true, 00:09:10.670 "nvme_admin": false, 00:09:10.670 "nvme_io": false, 00:09:10.670 "nvme_io_md": false, 00:09:10.670 "write_zeroes": true, 00:09:10.670 "zcopy": true, 00:09:10.670 "get_zone_info": false, 00:09:10.670 "zone_management": false, 00:09:10.670 "zone_append": false, 00:09:10.670 "compare": false, 00:09:10.670 "compare_and_write": false, 00:09:10.670 "abort": true, 00:09:10.670 "seek_hole": false, 00:09:10.670 "seek_data": false, 00:09:10.670 "copy": true, 00:09:10.670 "nvme_iov_md": false 00:09:10.670 }, 00:09:10.670 "memory_domains": [ 00:09:10.670 { 00:09:10.670 "dma_device_id": "system", 00:09:10.670 "dma_device_type": 1 00:09:10.670 }, 00:09:10.670 { 00:09:10.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.670 "dma_device_type": 2 00:09:10.670 } 00:09:10.670 ], 00:09:10.670 "driver_specific": 
{} 00:09:10.670 } 00:09:10.670 ] 00:09:10.670 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.670 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:10.670 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:10.670 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:10.670 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:10.670 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.670 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.670 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.670 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.670 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.670 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.670 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.670 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.670 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.670 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.670 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.670 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:09:10.670 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.670 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.670 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.670 "name": "Existed_Raid", 00:09:10.670 "uuid": "827785c5-938e-48bf-bee5-a7ecff49ec15", 00:09:10.670 "strip_size_kb": 64, 00:09:10.670 "state": "online", 00:09:10.670 "raid_level": "concat", 00:09:10.670 "superblock": true, 00:09:10.670 "num_base_bdevs": 3, 00:09:10.670 "num_base_bdevs_discovered": 3, 00:09:10.670 "num_base_bdevs_operational": 3, 00:09:10.670 "base_bdevs_list": [ 00:09:10.670 { 00:09:10.670 "name": "BaseBdev1", 00:09:10.670 "uuid": "52af21d9-92e7-4a54-9b6c-ed01d36639a6", 00:09:10.670 "is_configured": true, 00:09:10.670 "data_offset": 2048, 00:09:10.670 "data_size": 63488 00:09:10.670 }, 00:09:10.670 { 00:09:10.670 "name": "BaseBdev2", 00:09:10.670 "uuid": "22f22c4f-2a07-4c7c-8459-2fdd3afee478", 00:09:10.670 "is_configured": true, 00:09:10.670 "data_offset": 2048, 00:09:10.670 "data_size": 63488 00:09:10.670 }, 00:09:10.670 { 00:09:10.670 "name": "BaseBdev3", 00:09:10.670 "uuid": "63b4b6f3-7adc-4813-a7c6-13824c25ed23", 00:09:10.670 "is_configured": true, 00:09:10.670 "data_offset": 2048, 00:09:10.670 "data_size": 63488 00:09:10.670 } 00:09:10.670 ] 00:09:10.670 }' 00:09:10.670 15:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.671 15:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.929 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:10.929 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:10.929 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:09:10.929 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:10.929 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:10.929 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:10.929 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:10.929 15:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.929 15:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.929 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:10.929 [2024-11-20 15:16:57.350408] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:10.929 15:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.929 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:10.929 "name": "Existed_Raid", 00:09:10.929 "aliases": [ 00:09:10.929 "827785c5-938e-48bf-bee5-a7ecff49ec15" 00:09:10.929 ], 00:09:10.929 "product_name": "Raid Volume", 00:09:10.929 "block_size": 512, 00:09:10.929 "num_blocks": 190464, 00:09:10.929 "uuid": "827785c5-938e-48bf-bee5-a7ecff49ec15", 00:09:10.929 "assigned_rate_limits": { 00:09:10.929 "rw_ios_per_sec": 0, 00:09:10.929 "rw_mbytes_per_sec": 0, 00:09:10.929 "r_mbytes_per_sec": 0, 00:09:10.929 "w_mbytes_per_sec": 0 00:09:10.929 }, 00:09:10.929 "claimed": false, 00:09:10.929 "zoned": false, 00:09:10.929 "supported_io_types": { 00:09:10.929 "read": true, 00:09:10.929 "write": true, 00:09:10.929 "unmap": true, 00:09:10.929 "flush": true, 00:09:10.929 "reset": true, 00:09:10.929 "nvme_admin": false, 00:09:10.929 "nvme_io": false, 00:09:10.929 "nvme_io_md": false, 00:09:10.929 
"write_zeroes": true, 00:09:10.930 "zcopy": false, 00:09:10.930 "get_zone_info": false, 00:09:10.930 "zone_management": false, 00:09:10.930 "zone_append": false, 00:09:10.930 "compare": false, 00:09:10.930 "compare_and_write": false, 00:09:10.930 "abort": false, 00:09:10.930 "seek_hole": false, 00:09:10.930 "seek_data": false, 00:09:10.930 "copy": false, 00:09:10.930 "nvme_iov_md": false 00:09:10.930 }, 00:09:10.930 "memory_domains": [ 00:09:10.930 { 00:09:10.930 "dma_device_id": "system", 00:09:10.930 "dma_device_type": 1 00:09:10.930 }, 00:09:10.930 { 00:09:10.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.930 "dma_device_type": 2 00:09:10.930 }, 00:09:10.930 { 00:09:10.930 "dma_device_id": "system", 00:09:10.930 "dma_device_type": 1 00:09:10.930 }, 00:09:10.930 { 00:09:10.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.930 "dma_device_type": 2 00:09:10.930 }, 00:09:10.930 { 00:09:10.930 "dma_device_id": "system", 00:09:10.930 "dma_device_type": 1 00:09:10.930 }, 00:09:10.930 { 00:09:10.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.930 "dma_device_type": 2 00:09:10.930 } 00:09:10.930 ], 00:09:10.930 "driver_specific": { 00:09:10.930 "raid": { 00:09:10.930 "uuid": "827785c5-938e-48bf-bee5-a7ecff49ec15", 00:09:10.930 "strip_size_kb": 64, 00:09:10.930 "state": "online", 00:09:10.930 "raid_level": "concat", 00:09:10.930 "superblock": true, 00:09:10.930 "num_base_bdevs": 3, 00:09:10.930 "num_base_bdevs_discovered": 3, 00:09:10.930 "num_base_bdevs_operational": 3, 00:09:10.930 "base_bdevs_list": [ 00:09:10.930 { 00:09:10.930 "name": "BaseBdev1", 00:09:10.930 "uuid": "52af21d9-92e7-4a54-9b6c-ed01d36639a6", 00:09:10.930 "is_configured": true, 00:09:10.930 "data_offset": 2048, 00:09:10.930 "data_size": 63488 00:09:10.930 }, 00:09:10.930 { 00:09:10.930 "name": "BaseBdev2", 00:09:10.930 "uuid": "22f22c4f-2a07-4c7c-8459-2fdd3afee478", 00:09:10.930 "is_configured": true, 00:09:10.930 "data_offset": 2048, 00:09:10.930 "data_size": 63488 00:09:10.930 }, 
00:09:10.930 { 00:09:10.930 "name": "BaseBdev3", 00:09:10.930 "uuid": "63b4b6f3-7adc-4813-a7c6-13824c25ed23", 00:09:10.930 "is_configured": true, 00:09:10.930 "data_offset": 2048, 00:09:10.930 "data_size": 63488 00:09:10.930 } 00:09:10.930 ] 00:09:10.930 } 00:09:10.930 } 00:09:10.930 }' 00:09:10.930 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:11.188 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:11.188 BaseBdev2 00:09:11.188 BaseBdev3' 00:09:11.188 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.188 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:11.188 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.188 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:11.188 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.188 15:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.188 15:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.188 15:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.188 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.188 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.188 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.188 
15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.188 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:11.188 15:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.188 15:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.188 15:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.188 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.188 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.188 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.188 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.188 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:11.188 15:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.188 15:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.188 15:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.188 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.188 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.188 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:11.188 15:16:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.188 15:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.188 [2024-11-20 15:16:57.613827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:11.188 [2024-11-20 15:16:57.613855] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:11.188 [2024-11-20 15:16:57.613910] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.446 15:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.446 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:11.446 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:11.446 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:11.446 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:11.446 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:11.446 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:11.446 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.446 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:11.446 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.446 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.446 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:11.446 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:11.446 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.446 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.446 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.446 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.446 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.446 15:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.446 15:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.446 15:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.446 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.446 "name": "Existed_Raid", 00:09:11.446 "uuid": "827785c5-938e-48bf-bee5-a7ecff49ec15", 00:09:11.446 "strip_size_kb": 64, 00:09:11.446 "state": "offline", 00:09:11.446 "raid_level": "concat", 00:09:11.446 "superblock": true, 00:09:11.446 "num_base_bdevs": 3, 00:09:11.446 "num_base_bdevs_discovered": 2, 00:09:11.446 "num_base_bdevs_operational": 2, 00:09:11.446 "base_bdevs_list": [ 00:09:11.446 { 00:09:11.446 "name": null, 00:09:11.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.446 "is_configured": false, 00:09:11.446 "data_offset": 0, 00:09:11.446 "data_size": 63488 00:09:11.446 }, 00:09:11.446 { 00:09:11.446 "name": "BaseBdev2", 00:09:11.446 "uuid": "22f22c4f-2a07-4c7c-8459-2fdd3afee478", 00:09:11.446 "is_configured": true, 00:09:11.446 "data_offset": 2048, 00:09:11.446 "data_size": 63488 00:09:11.446 }, 00:09:11.446 { 00:09:11.446 "name": "BaseBdev3", 00:09:11.446 "uuid": "63b4b6f3-7adc-4813-a7c6-13824c25ed23", 
00:09:11.446 "is_configured": true, 00:09:11.446 "data_offset": 2048, 00:09:11.446 "data_size": 63488 00:09:11.446 } 00:09:11.446 ] 00:09:11.446 }' 00:09:11.446 15:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.446 15:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.705 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:11.705 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:11.705 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.705 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.705 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.705 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:11.705 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.705 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:11.705 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:11.705 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:11.705 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.705 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.705 [2024-11-20 15:16:58.171251] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:11.964 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.964 15:16:58 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:11.964 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:11.964 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:11.964 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.964 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.964 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.964 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.964 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:11.964 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:11.965 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:11.965 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.965 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.965 [2024-11-20 15:16:58.316592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:11.965 [2024-11-20 15:16:58.316773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:11.965 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.965 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:11.965 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:11.965 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:11.965 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:11.965 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.965 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.965 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.223 BaseBdev2 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:12.223 15:16:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.223 [ 00:09:12.223 { 00:09:12.223 "name": "BaseBdev2", 00:09:12.223 "aliases": [ 00:09:12.223 "09275f3b-48cd-48a4-b666-4f77b28319da" 00:09:12.223 ], 00:09:12.223 "product_name": "Malloc disk", 00:09:12.223 "block_size": 512, 00:09:12.223 "num_blocks": 65536, 00:09:12.223 "uuid": "09275f3b-48cd-48a4-b666-4f77b28319da", 00:09:12.223 "assigned_rate_limits": { 00:09:12.223 "rw_ios_per_sec": 0, 00:09:12.223 "rw_mbytes_per_sec": 0, 00:09:12.223 "r_mbytes_per_sec": 0, 00:09:12.223 "w_mbytes_per_sec": 0 00:09:12.223 }, 00:09:12.223 "claimed": false, 00:09:12.223 "zoned": false, 00:09:12.223 "supported_io_types": { 00:09:12.223 "read": true, 00:09:12.223 "write": true, 00:09:12.223 "unmap": true, 00:09:12.223 "flush": true, 00:09:12.223 "reset": true, 00:09:12.223 "nvme_admin": false, 00:09:12.223 "nvme_io": false, 00:09:12.223 "nvme_io_md": false, 00:09:12.223 "write_zeroes": true, 00:09:12.223 "zcopy": true, 00:09:12.223 "get_zone_info": false, 00:09:12.223 
"zone_management": false, 00:09:12.223 "zone_append": false, 00:09:12.223 "compare": false, 00:09:12.223 "compare_and_write": false, 00:09:12.223 "abort": true, 00:09:12.223 "seek_hole": false, 00:09:12.223 "seek_data": false, 00:09:12.223 "copy": true, 00:09:12.223 "nvme_iov_md": false 00:09:12.223 }, 00:09:12.223 "memory_domains": [ 00:09:12.223 { 00:09:12.223 "dma_device_id": "system", 00:09:12.223 "dma_device_type": 1 00:09:12.223 }, 00:09:12.223 { 00:09:12.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.223 "dma_device_type": 2 00:09:12.223 } 00:09:12.223 ], 00:09:12.223 "driver_specific": {} 00:09:12.223 } 00:09:12.223 ] 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.223 BaseBdev3 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.223 [ 00:09:12.223 { 00:09:12.223 "name": "BaseBdev3", 00:09:12.223 "aliases": [ 00:09:12.223 "fe32b40a-1dc1-49ba-9273-92231f46e630" 00:09:12.223 ], 00:09:12.223 "product_name": "Malloc disk", 00:09:12.223 "block_size": 512, 00:09:12.223 "num_blocks": 65536, 00:09:12.223 "uuid": "fe32b40a-1dc1-49ba-9273-92231f46e630", 00:09:12.223 "assigned_rate_limits": { 00:09:12.223 "rw_ios_per_sec": 0, 00:09:12.223 "rw_mbytes_per_sec": 0, 00:09:12.223 "r_mbytes_per_sec": 0, 00:09:12.223 "w_mbytes_per_sec": 0 00:09:12.223 }, 00:09:12.223 "claimed": false, 00:09:12.223 "zoned": false, 00:09:12.223 "supported_io_types": { 00:09:12.223 "read": true, 00:09:12.223 "write": true, 00:09:12.223 "unmap": true, 00:09:12.223 "flush": true, 00:09:12.223 "reset": true, 00:09:12.223 "nvme_admin": false, 00:09:12.223 "nvme_io": false, 00:09:12.223 "nvme_io_md": false, 00:09:12.223 "write_zeroes": true, 00:09:12.223 
"zcopy": true, 00:09:12.223 "get_zone_info": false, 00:09:12.223 "zone_management": false, 00:09:12.223 "zone_append": false, 00:09:12.223 "compare": false, 00:09:12.223 "compare_and_write": false, 00:09:12.223 "abort": true, 00:09:12.223 "seek_hole": false, 00:09:12.223 "seek_data": false, 00:09:12.223 "copy": true, 00:09:12.223 "nvme_iov_md": false 00:09:12.223 }, 00:09:12.223 "memory_domains": [ 00:09:12.223 { 00:09:12.223 "dma_device_id": "system", 00:09:12.223 "dma_device_type": 1 00:09:12.223 }, 00:09:12.223 { 00:09:12.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.223 "dma_device_type": 2 00:09:12.223 } 00:09:12.223 ], 00:09:12.223 "driver_specific": {} 00:09:12.223 } 00:09:12.223 ] 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.223 [2024-11-20 15:16:58.643145] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:12.223 [2024-11-20 15:16:58.643194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:12.223 [2024-11-20 15:16:58.643218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:12.223 [2024-11-20 15:16:58.645265] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.223 15:16:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.223 "name": "Existed_Raid", 00:09:12.223 "uuid": "9779aa70-86bb-4275-8589-ca2da2b80467", 00:09:12.223 "strip_size_kb": 64, 00:09:12.223 "state": "configuring", 00:09:12.223 "raid_level": "concat", 00:09:12.223 "superblock": true, 00:09:12.223 "num_base_bdevs": 3, 00:09:12.223 "num_base_bdevs_discovered": 2, 00:09:12.223 "num_base_bdevs_operational": 3, 00:09:12.223 "base_bdevs_list": [ 00:09:12.223 { 00:09:12.223 "name": "BaseBdev1", 00:09:12.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.223 "is_configured": false, 00:09:12.223 "data_offset": 0, 00:09:12.223 "data_size": 0 00:09:12.223 }, 00:09:12.223 { 00:09:12.223 "name": "BaseBdev2", 00:09:12.223 "uuid": "09275f3b-48cd-48a4-b666-4f77b28319da", 00:09:12.223 "is_configured": true, 00:09:12.223 "data_offset": 2048, 00:09:12.223 "data_size": 63488 00:09:12.223 }, 00:09:12.223 { 00:09:12.223 "name": "BaseBdev3", 00:09:12.223 "uuid": "fe32b40a-1dc1-49ba-9273-92231f46e630", 00:09:12.223 "is_configured": true, 00:09:12.223 "data_offset": 2048, 00:09:12.223 "data_size": 63488 00:09:12.223 } 00:09:12.223 ] 00:09:12.223 }' 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.223 15:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.790 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:12.790 15:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.790 15:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.790 [2024-11-20 15:16:59.058673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:12.790 15:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.790 15:16:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:12.790 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.790 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.790 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.790 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.790 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.790 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.790 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.791 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.791 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.791 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.791 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.791 15:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.791 15:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.791 15:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.791 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.791 "name": "Existed_Raid", 00:09:12.791 "uuid": "9779aa70-86bb-4275-8589-ca2da2b80467", 00:09:12.791 "strip_size_kb": 64, 
00:09:12.791 "state": "configuring", 00:09:12.791 "raid_level": "concat", 00:09:12.791 "superblock": true, 00:09:12.791 "num_base_bdevs": 3, 00:09:12.791 "num_base_bdevs_discovered": 1, 00:09:12.791 "num_base_bdevs_operational": 3, 00:09:12.791 "base_bdevs_list": [ 00:09:12.791 { 00:09:12.791 "name": "BaseBdev1", 00:09:12.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.791 "is_configured": false, 00:09:12.791 "data_offset": 0, 00:09:12.791 "data_size": 0 00:09:12.791 }, 00:09:12.791 { 00:09:12.791 "name": null, 00:09:12.791 "uuid": "09275f3b-48cd-48a4-b666-4f77b28319da", 00:09:12.791 "is_configured": false, 00:09:12.791 "data_offset": 0, 00:09:12.791 "data_size": 63488 00:09:12.791 }, 00:09:12.791 { 00:09:12.791 "name": "BaseBdev3", 00:09:12.791 "uuid": "fe32b40a-1dc1-49ba-9273-92231f46e630", 00:09:12.791 "is_configured": true, 00:09:12.791 "data_offset": 2048, 00:09:12.791 "data_size": 63488 00:09:12.791 } 00:09:12.791 ] 00:09:12.791 }' 00:09:12.791 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.791 15:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.049 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.049 15:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.049 15:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.049 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:13.049 15:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.308 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:13.308 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:09:13.308 15:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.308 15:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.308 [2024-11-20 15:16:59.572619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:13.308 BaseBdev1 00:09:13.308 15:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.308 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:13.309 15:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:13.309 15:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:13.309 15:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:13.309 15:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:13.309 15:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:13.309 15:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:13.309 15:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.309 15:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.309 15:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.309 15:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:13.309 15:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.309 15:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.309 
[ 00:09:13.309 { 00:09:13.309 "name": "BaseBdev1", 00:09:13.309 "aliases": [ 00:09:13.309 "89464b6f-f562-49c2-b67d-d49442949d39" 00:09:13.309 ], 00:09:13.309 "product_name": "Malloc disk", 00:09:13.309 "block_size": 512, 00:09:13.309 "num_blocks": 65536, 00:09:13.309 "uuid": "89464b6f-f562-49c2-b67d-d49442949d39", 00:09:13.309 "assigned_rate_limits": { 00:09:13.309 "rw_ios_per_sec": 0, 00:09:13.309 "rw_mbytes_per_sec": 0, 00:09:13.309 "r_mbytes_per_sec": 0, 00:09:13.309 "w_mbytes_per_sec": 0 00:09:13.309 }, 00:09:13.309 "claimed": true, 00:09:13.309 "claim_type": "exclusive_write", 00:09:13.309 "zoned": false, 00:09:13.309 "supported_io_types": { 00:09:13.309 "read": true, 00:09:13.309 "write": true, 00:09:13.309 "unmap": true, 00:09:13.309 "flush": true, 00:09:13.309 "reset": true, 00:09:13.309 "nvme_admin": false, 00:09:13.309 "nvme_io": false, 00:09:13.309 "nvme_io_md": false, 00:09:13.309 "write_zeroes": true, 00:09:13.309 "zcopy": true, 00:09:13.309 "get_zone_info": false, 00:09:13.309 "zone_management": false, 00:09:13.309 "zone_append": false, 00:09:13.309 "compare": false, 00:09:13.309 "compare_and_write": false, 00:09:13.309 "abort": true, 00:09:13.309 "seek_hole": false, 00:09:13.309 "seek_data": false, 00:09:13.309 "copy": true, 00:09:13.309 "nvme_iov_md": false 00:09:13.309 }, 00:09:13.309 "memory_domains": [ 00:09:13.309 { 00:09:13.309 "dma_device_id": "system", 00:09:13.309 "dma_device_type": 1 00:09:13.309 }, 00:09:13.309 { 00:09:13.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.309 "dma_device_type": 2 00:09:13.309 } 00:09:13.309 ], 00:09:13.309 "driver_specific": {} 00:09:13.309 } 00:09:13.309 ] 00:09:13.309 15:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.309 15:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:13.309 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:09:13.309 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.309 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.309 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:13.309 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.309 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.309 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.309 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.309 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.309 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.309 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.309 15:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.309 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.309 15:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.309 15:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.309 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.309 "name": "Existed_Raid", 00:09:13.309 "uuid": "9779aa70-86bb-4275-8589-ca2da2b80467", 00:09:13.309 "strip_size_kb": 64, 00:09:13.309 "state": "configuring", 00:09:13.309 "raid_level": "concat", 00:09:13.309 "superblock": true, 
00:09:13.309 "num_base_bdevs": 3, 00:09:13.309 "num_base_bdevs_discovered": 2, 00:09:13.309 "num_base_bdevs_operational": 3, 00:09:13.309 "base_bdevs_list": [ 00:09:13.309 { 00:09:13.309 "name": "BaseBdev1", 00:09:13.309 "uuid": "89464b6f-f562-49c2-b67d-d49442949d39", 00:09:13.309 "is_configured": true, 00:09:13.309 "data_offset": 2048, 00:09:13.309 "data_size": 63488 00:09:13.309 }, 00:09:13.309 { 00:09:13.309 "name": null, 00:09:13.309 "uuid": "09275f3b-48cd-48a4-b666-4f77b28319da", 00:09:13.309 "is_configured": false, 00:09:13.309 "data_offset": 0, 00:09:13.309 "data_size": 63488 00:09:13.309 }, 00:09:13.309 { 00:09:13.309 "name": "BaseBdev3", 00:09:13.309 "uuid": "fe32b40a-1dc1-49ba-9273-92231f46e630", 00:09:13.309 "is_configured": true, 00:09:13.309 "data_offset": 2048, 00:09:13.309 "data_size": 63488 00:09:13.309 } 00:09:13.309 ] 00:09:13.309 }' 00:09:13.309 15:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.309 15:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.877 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.877 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:13.877 15:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.877 15:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.877 15:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.877 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:13.877 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:13.877 15:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:09:13.877 15:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.877 [2024-11-20 15:17:00.099937] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:13.877 15:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.877 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:13.877 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.877 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.877 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:13.877 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.877 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.877 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.877 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.877 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.877 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.877 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.877 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.877 15:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.877 15:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:09:13.877 15:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.877 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.877 "name": "Existed_Raid", 00:09:13.877 "uuid": "9779aa70-86bb-4275-8589-ca2da2b80467", 00:09:13.877 "strip_size_kb": 64, 00:09:13.877 "state": "configuring", 00:09:13.877 "raid_level": "concat", 00:09:13.877 "superblock": true, 00:09:13.877 "num_base_bdevs": 3, 00:09:13.877 "num_base_bdevs_discovered": 1, 00:09:13.877 "num_base_bdevs_operational": 3, 00:09:13.877 "base_bdevs_list": [ 00:09:13.877 { 00:09:13.877 "name": "BaseBdev1", 00:09:13.877 "uuid": "89464b6f-f562-49c2-b67d-d49442949d39", 00:09:13.877 "is_configured": true, 00:09:13.877 "data_offset": 2048, 00:09:13.877 "data_size": 63488 00:09:13.877 }, 00:09:13.877 { 00:09:13.878 "name": null, 00:09:13.878 "uuid": "09275f3b-48cd-48a4-b666-4f77b28319da", 00:09:13.878 "is_configured": false, 00:09:13.878 "data_offset": 0, 00:09:13.878 "data_size": 63488 00:09:13.878 }, 00:09:13.878 { 00:09:13.878 "name": null, 00:09:13.878 "uuid": "fe32b40a-1dc1-49ba-9273-92231f46e630", 00:09:13.878 "is_configured": false, 00:09:13.878 "data_offset": 0, 00:09:13.878 "data_size": 63488 00:09:13.878 } 00:09:13.878 ] 00:09:13.878 }' 00:09:13.878 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.878 15:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.137 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.137 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:14.137 15:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.137 15:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:14.137 15:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.137 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:14.137 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:14.137 15:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.137 15:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.137 [2024-11-20 15:17:00.507761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:14.137 15:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.137 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:14.137 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.137 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.137 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.137 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.137 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.137 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.137 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.137 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.137 15:17:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.137 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.137 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.137 15:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.137 15:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.137 15:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.137 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.137 "name": "Existed_Raid", 00:09:14.137 "uuid": "9779aa70-86bb-4275-8589-ca2da2b80467", 00:09:14.137 "strip_size_kb": 64, 00:09:14.137 "state": "configuring", 00:09:14.137 "raid_level": "concat", 00:09:14.137 "superblock": true, 00:09:14.137 "num_base_bdevs": 3, 00:09:14.137 "num_base_bdevs_discovered": 2, 00:09:14.137 "num_base_bdevs_operational": 3, 00:09:14.137 "base_bdevs_list": [ 00:09:14.137 { 00:09:14.137 "name": "BaseBdev1", 00:09:14.137 "uuid": "89464b6f-f562-49c2-b67d-d49442949d39", 00:09:14.137 "is_configured": true, 00:09:14.137 "data_offset": 2048, 00:09:14.137 "data_size": 63488 00:09:14.137 }, 00:09:14.137 { 00:09:14.137 "name": null, 00:09:14.137 "uuid": "09275f3b-48cd-48a4-b666-4f77b28319da", 00:09:14.137 "is_configured": false, 00:09:14.137 "data_offset": 0, 00:09:14.137 "data_size": 63488 00:09:14.137 }, 00:09:14.137 { 00:09:14.137 "name": "BaseBdev3", 00:09:14.137 "uuid": "fe32b40a-1dc1-49ba-9273-92231f46e630", 00:09:14.137 "is_configured": true, 00:09:14.137 "data_offset": 2048, 00:09:14.137 "data_size": 63488 00:09:14.137 } 00:09:14.137 ] 00:09:14.137 }' 00:09:14.137 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.137 15:17:00 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:09:14.457 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.457 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:14.457 15:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.457 15:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.716 15:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.716 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:14.716 15:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:14.716 15:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.716 15:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.716 [2024-11-20 15:17:00.955233] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:14.716 15:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.716 15:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:14.716 15:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.716 15:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.716 15:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.716 15:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.716 15:17:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.716 15:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.716 15:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.716 15:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.716 15:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.716 15:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.716 15:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.716 15:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.716 15:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.716 15:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.716 15:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.716 "name": "Existed_Raid", 00:09:14.716 "uuid": "9779aa70-86bb-4275-8589-ca2da2b80467", 00:09:14.716 "strip_size_kb": 64, 00:09:14.716 "state": "configuring", 00:09:14.716 "raid_level": "concat", 00:09:14.716 "superblock": true, 00:09:14.716 "num_base_bdevs": 3, 00:09:14.716 "num_base_bdevs_discovered": 1, 00:09:14.716 "num_base_bdevs_operational": 3, 00:09:14.716 "base_bdevs_list": [ 00:09:14.716 { 00:09:14.716 "name": null, 00:09:14.716 "uuid": "89464b6f-f562-49c2-b67d-d49442949d39", 00:09:14.716 "is_configured": false, 00:09:14.716 "data_offset": 0, 00:09:14.716 "data_size": 63488 00:09:14.716 }, 00:09:14.716 { 00:09:14.716 "name": null, 00:09:14.716 "uuid": "09275f3b-48cd-48a4-b666-4f77b28319da", 00:09:14.716 "is_configured": false, 00:09:14.716 "data_offset": 0, 
00:09:14.716 "data_size": 63488 00:09:14.716 }, 00:09:14.716 { 00:09:14.716 "name": "BaseBdev3", 00:09:14.716 "uuid": "fe32b40a-1dc1-49ba-9273-92231f46e630", 00:09:14.716 "is_configured": true, 00:09:14.716 "data_offset": 2048, 00:09:14.716 "data_size": 63488 00:09:14.716 } 00:09:14.716 ] 00:09:14.716 }' 00:09:14.716 15:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.716 15:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.286 15:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.286 15:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.286 15:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:15.286 15:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.286 15:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.286 15:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:15.286 15:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:15.286 15:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.286 15:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.286 [2024-11-20 15:17:01.579123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.286 15:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.286 15:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:15.286 15:17:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.286 15:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.286 15:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.286 15:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.286 15:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.286 15:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.286 15:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.286 15:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.286 15:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.286 15:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.286 15:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.286 15:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.286 15:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.286 15:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.286 15:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.286 "name": "Existed_Raid", 00:09:15.286 "uuid": "9779aa70-86bb-4275-8589-ca2da2b80467", 00:09:15.286 "strip_size_kb": 64, 00:09:15.286 "state": "configuring", 00:09:15.286 "raid_level": "concat", 00:09:15.286 "superblock": true, 00:09:15.286 "num_base_bdevs": 3, 00:09:15.286 
"num_base_bdevs_discovered": 2, 00:09:15.286 "num_base_bdevs_operational": 3, 00:09:15.286 "base_bdevs_list": [ 00:09:15.286 { 00:09:15.286 "name": null, 00:09:15.286 "uuid": "89464b6f-f562-49c2-b67d-d49442949d39", 00:09:15.286 "is_configured": false, 00:09:15.286 "data_offset": 0, 00:09:15.286 "data_size": 63488 00:09:15.286 }, 00:09:15.286 { 00:09:15.286 "name": "BaseBdev2", 00:09:15.286 "uuid": "09275f3b-48cd-48a4-b666-4f77b28319da", 00:09:15.286 "is_configured": true, 00:09:15.286 "data_offset": 2048, 00:09:15.286 "data_size": 63488 00:09:15.286 }, 00:09:15.286 { 00:09:15.286 "name": "BaseBdev3", 00:09:15.286 "uuid": "fe32b40a-1dc1-49ba-9273-92231f46e630", 00:09:15.286 "is_configured": true, 00:09:15.286 "data_offset": 2048, 00:09:15.286 "data_size": 63488 00:09:15.286 } 00:09:15.286 ] 00:09:15.286 }' 00:09:15.286 15:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.286 15:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.572 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.572 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:15.572 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.572 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.572 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.572 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.832 15:17:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 89464b6f-f562-49c2-b67d-d49442949d39 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.832 [2024-11-20 15:17:02.141309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:15.832 [2024-11-20 15:17:02.141538] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:15.832 [2024-11-20 15:17:02.141558] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:15.832 [2024-11-20 15:17:02.141863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:15.832 [2024-11-20 15:17:02.142003] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:15.832 [2024-11-20 15:17:02.142014] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:15.832 NewBaseBdev 00:09:15.832 [2024-11-20 15:17:02.142141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 
00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.832 [ 00:09:15.832 { 00:09:15.832 "name": "NewBaseBdev", 00:09:15.832 "aliases": [ 00:09:15.832 "89464b6f-f562-49c2-b67d-d49442949d39" 00:09:15.832 ], 00:09:15.832 "product_name": "Malloc disk", 00:09:15.832 "block_size": 512, 00:09:15.832 "num_blocks": 65536, 00:09:15.832 "uuid": "89464b6f-f562-49c2-b67d-d49442949d39", 00:09:15.832 "assigned_rate_limits": { 00:09:15.832 "rw_ios_per_sec": 0, 00:09:15.832 "rw_mbytes_per_sec": 0, 00:09:15.832 "r_mbytes_per_sec": 0, 00:09:15.832 "w_mbytes_per_sec": 0 00:09:15.832 }, 00:09:15.832 "claimed": true, 00:09:15.832 "claim_type": "exclusive_write", 00:09:15.832 "zoned": false, 00:09:15.832 "supported_io_types": { 00:09:15.832 "read": true, 00:09:15.832 "write": true, 
00:09:15.832 "unmap": true, 00:09:15.832 "flush": true, 00:09:15.832 "reset": true, 00:09:15.832 "nvme_admin": false, 00:09:15.832 "nvme_io": false, 00:09:15.832 "nvme_io_md": false, 00:09:15.832 "write_zeroes": true, 00:09:15.832 "zcopy": true, 00:09:15.832 "get_zone_info": false, 00:09:15.832 "zone_management": false, 00:09:15.832 "zone_append": false, 00:09:15.832 "compare": false, 00:09:15.832 "compare_and_write": false, 00:09:15.832 "abort": true, 00:09:15.832 "seek_hole": false, 00:09:15.832 "seek_data": false, 00:09:15.832 "copy": true, 00:09:15.832 "nvme_iov_md": false 00:09:15.832 }, 00:09:15.832 "memory_domains": [ 00:09:15.832 { 00:09:15.832 "dma_device_id": "system", 00:09:15.832 "dma_device_type": 1 00:09:15.832 }, 00:09:15.832 { 00:09:15.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.832 "dma_device_type": 2 00:09:15.832 } 00:09:15.832 ], 00:09:15.832 "driver_specific": {} 00:09:15.832 } 00:09:15.832 ] 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.832 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.832 "name": "Existed_Raid", 00:09:15.832 "uuid": "9779aa70-86bb-4275-8589-ca2da2b80467", 00:09:15.832 "strip_size_kb": 64, 00:09:15.832 "state": "online", 00:09:15.832 "raid_level": "concat", 00:09:15.832 "superblock": true, 00:09:15.832 "num_base_bdevs": 3, 00:09:15.832 "num_base_bdevs_discovered": 3, 00:09:15.832 "num_base_bdevs_operational": 3, 00:09:15.832 "base_bdevs_list": [ 00:09:15.832 { 00:09:15.832 "name": "NewBaseBdev", 00:09:15.832 "uuid": "89464b6f-f562-49c2-b67d-d49442949d39", 00:09:15.832 "is_configured": true, 00:09:15.832 "data_offset": 2048, 00:09:15.832 "data_size": 63488 00:09:15.832 }, 00:09:15.832 { 00:09:15.832 "name": "BaseBdev2", 00:09:15.832 "uuid": "09275f3b-48cd-48a4-b666-4f77b28319da", 00:09:15.832 "is_configured": true, 00:09:15.832 "data_offset": 2048, 00:09:15.832 "data_size": 63488 00:09:15.833 }, 00:09:15.833 { 00:09:15.833 "name": "BaseBdev3", 00:09:15.833 "uuid": 
"fe32b40a-1dc1-49ba-9273-92231f46e630", 00:09:15.833 "is_configured": true, 00:09:15.833 "data_offset": 2048, 00:09:15.833 "data_size": 63488 00:09:15.833 } 00:09:15.833 ] 00:09:15.833 }' 00:09:15.833 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.833 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.092 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:16.092 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:16.092 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:16.092 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:16.092 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:16.092 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:16.092 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:16.092 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:16.092 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.092 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.092 [2024-11-20 15:17:02.569060] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.351 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.351 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:16.351 "name": "Existed_Raid", 00:09:16.351 "aliases": [ 00:09:16.351 "9779aa70-86bb-4275-8589-ca2da2b80467" 
00:09:16.351 ], 00:09:16.351 "product_name": "Raid Volume", 00:09:16.351 "block_size": 512, 00:09:16.351 "num_blocks": 190464, 00:09:16.351 "uuid": "9779aa70-86bb-4275-8589-ca2da2b80467", 00:09:16.351 "assigned_rate_limits": { 00:09:16.351 "rw_ios_per_sec": 0, 00:09:16.351 "rw_mbytes_per_sec": 0, 00:09:16.351 "r_mbytes_per_sec": 0, 00:09:16.351 "w_mbytes_per_sec": 0 00:09:16.351 }, 00:09:16.351 "claimed": false, 00:09:16.351 "zoned": false, 00:09:16.351 "supported_io_types": { 00:09:16.351 "read": true, 00:09:16.351 "write": true, 00:09:16.351 "unmap": true, 00:09:16.351 "flush": true, 00:09:16.351 "reset": true, 00:09:16.351 "nvme_admin": false, 00:09:16.351 "nvme_io": false, 00:09:16.351 "nvme_io_md": false, 00:09:16.351 "write_zeroes": true, 00:09:16.351 "zcopy": false, 00:09:16.351 "get_zone_info": false, 00:09:16.351 "zone_management": false, 00:09:16.351 "zone_append": false, 00:09:16.351 "compare": false, 00:09:16.351 "compare_and_write": false, 00:09:16.351 "abort": false, 00:09:16.351 "seek_hole": false, 00:09:16.351 "seek_data": false, 00:09:16.351 "copy": false, 00:09:16.351 "nvme_iov_md": false 00:09:16.351 }, 00:09:16.351 "memory_domains": [ 00:09:16.351 { 00:09:16.351 "dma_device_id": "system", 00:09:16.351 "dma_device_type": 1 00:09:16.351 }, 00:09:16.351 { 00:09:16.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.351 "dma_device_type": 2 00:09:16.351 }, 00:09:16.351 { 00:09:16.351 "dma_device_id": "system", 00:09:16.351 "dma_device_type": 1 00:09:16.351 }, 00:09:16.351 { 00:09:16.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.351 "dma_device_type": 2 00:09:16.351 }, 00:09:16.351 { 00:09:16.351 "dma_device_id": "system", 00:09:16.351 "dma_device_type": 1 00:09:16.351 }, 00:09:16.351 { 00:09:16.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.351 "dma_device_type": 2 00:09:16.351 } 00:09:16.351 ], 00:09:16.351 "driver_specific": { 00:09:16.351 "raid": { 00:09:16.351 "uuid": "9779aa70-86bb-4275-8589-ca2da2b80467", 00:09:16.351 
"strip_size_kb": 64, 00:09:16.351 "state": "online", 00:09:16.351 "raid_level": "concat", 00:09:16.351 "superblock": true, 00:09:16.351 "num_base_bdevs": 3, 00:09:16.351 "num_base_bdevs_discovered": 3, 00:09:16.351 "num_base_bdevs_operational": 3, 00:09:16.351 "base_bdevs_list": [ 00:09:16.351 { 00:09:16.351 "name": "NewBaseBdev", 00:09:16.351 "uuid": "89464b6f-f562-49c2-b67d-d49442949d39", 00:09:16.351 "is_configured": true, 00:09:16.351 "data_offset": 2048, 00:09:16.351 "data_size": 63488 00:09:16.351 }, 00:09:16.351 { 00:09:16.351 "name": "BaseBdev2", 00:09:16.351 "uuid": "09275f3b-48cd-48a4-b666-4f77b28319da", 00:09:16.351 "is_configured": true, 00:09:16.351 "data_offset": 2048, 00:09:16.351 "data_size": 63488 00:09:16.351 }, 00:09:16.351 { 00:09:16.351 "name": "BaseBdev3", 00:09:16.351 "uuid": "fe32b40a-1dc1-49ba-9273-92231f46e630", 00:09:16.351 "is_configured": true, 00:09:16.351 "data_offset": 2048, 00:09:16.351 "data_size": 63488 00:09:16.351 } 00:09:16.351 ] 00:09:16.351 } 00:09:16.351 } 00:09:16.351 }' 00:09:16.351 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:16.351 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:16.351 BaseBdev2 00:09:16.351 BaseBdev3' 00:09:16.351 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.351 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:16.351 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.351 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.351 15:17:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:16.351 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.351 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.351 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.351 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.351 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.351 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.352 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:16.352 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.352 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.352 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.352 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.352 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.352 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.352 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.352 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.352 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 
00:09:16.352 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.352 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.352 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.352 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.352 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.352 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:16.352 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.352 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.352 [2024-11-20 15:17:02.816579] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:16.352 [2024-11-20 15:17:02.816607] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.352 [2024-11-20 15:17:02.816695] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.352 [2024-11-20 15:17:02.816749] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:16.352 [2024-11-20 15:17:02.816763] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:16.352 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.352 15:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66094 00:09:16.352 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66094 ']' 00:09:16.352 15:17:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 66094 00:09:16.352 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:16.352 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.352 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66094 00:09:16.613 killing process with pid 66094 00:09:16.613 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:16.613 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:16.613 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66094' 00:09:16.613 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66094 00:09:16.613 [2024-11-20 15:17:02.866934] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:16.613 15:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66094 00:09:16.873 [2024-11-20 15:17:03.173186] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:18.248 15:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:18.249 ************************************ 00:09:18.249 END TEST raid_state_function_test_sb 00:09:18.249 ************************************ 00:09:18.249 00:09:18.249 real 0m10.368s 00:09:18.249 user 0m16.456s 00:09:18.249 sys 0m2.009s 00:09:18.249 15:17:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.249 15:17:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.249 15:17:04 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:18.249 15:17:04 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:18.249 15:17:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.249 15:17:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:18.249 ************************************ 00:09:18.249 START TEST raid_superblock_test 00:09:18.249 ************************************ 00:09:18.249 15:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:09:18.249 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:18.249 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:18.249 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:18.249 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:18.249 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:18.249 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:18.249 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:18.249 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:18.249 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:18.249 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:18.249 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:18.249 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:18.249 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:18.249 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:18.249 15:17:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:18.249 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:18.249 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66716 00:09:18.249 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:18.249 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66716 00:09:18.249 15:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66716 ']' 00:09:18.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.249 15:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.249 15:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.249 15:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.249 15:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.249 15:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.249 [2024-11-20 15:17:04.496305] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:09:18.249 [2024-11-20 15:17:04.496603] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66716 ] 00:09:18.249 [2024-11-20 15:17:04.673033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.507 [2024-11-20 15:17:04.788467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.765 [2024-11-20 15:17:04.996063] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.765 [2024-11-20 15:17:04.996123] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:19.024 
15:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.024 malloc1 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.024 [2024-11-20 15:17:05.394685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:19.024 [2024-11-20 15:17:05.394890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.024 [2024-11-20 15:17:05.394957] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:19.024 [2024-11-20 15:17:05.395046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.024 [2024-11-20 15:17:05.397878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.024 [2024-11-20 15:17:05.397919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:19.024 pt1 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.024 malloc2 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.024 [2024-11-20 15:17:05.444222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:19.024 [2024-11-20 15:17:05.444393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.024 [2024-11-20 15:17:05.444431] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:19.024 [2024-11-20 15:17:05.444443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.024 [2024-11-20 15:17:05.446787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.024 [2024-11-20 15:17:05.446824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:19.024 
pt2 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.024 malloc3 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.024 15:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.315 [2024-11-20 15:17:05.507355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:19.315 [2024-11-20 15:17:05.507514] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.315 [2024-11-20 15:17:05.507573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:19.315 [2024-11-20 15:17:05.507690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.315 [2024-11-20 15:17:05.510009] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.315 [2024-11-20 15:17:05.510140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:19.315 pt3 00:09:19.315 15:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.315 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:19.315 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:19.315 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:19.315 15:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.315 15:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.315 [2024-11-20 15:17:05.519397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:19.315 [2024-11-20 15:17:05.521434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:19.315 [2024-11-20 15:17:05.521498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:19.315 [2024-11-20 15:17:05.521640] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:19.315 [2024-11-20 15:17:05.521672] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:19.315 [2024-11-20 15:17:05.521918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:19.315 [2024-11-20 15:17:05.522059] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:19.315 [2024-11-20 15:17:05.522069] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:19.315 [2024-11-20 15:17:05.522204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.315 15:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.315 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:19.315 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.315 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.315 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.315 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.315 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.315 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.315 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.315 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.315 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.315 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.315 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.315 15:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.315 15:17:05 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.315 15:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.315 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.315 "name": "raid_bdev1", 00:09:19.315 "uuid": "73a62280-c30a-4650-85e3-a58f1f21541b", 00:09:19.315 "strip_size_kb": 64, 00:09:19.315 "state": "online", 00:09:19.315 "raid_level": "concat", 00:09:19.315 "superblock": true, 00:09:19.315 "num_base_bdevs": 3, 00:09:19.315 "num_base_bdevs_discovered": 3, 00:09:19.315 "num_base_bdevs_operational": 3, 00:09:19.315 "base_bdevs_list": [ 00:09:19.315 { 00:09:19.315 "name": "pt1", 00:09:19.315 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:19.315 "is_configured": true, 00:09:19.315 "data_offset": 2048, 00:09:19.315 "data_size": 63488 00:09:19.315 }, 00:09:19.315 { 00:09:19.315 "name": "pt2", 00:09:19.315 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:19.315 "is_configured": true, 00:09:19.315 "data_offset": 2048, 00:09:19.315 "data_size": 63488 00:09:19.315 }, 00:09:19.315 { 00:09:19.315 "name": "pt3", 00:09:19.315 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:19.315 "is_configured": true, 00:09:19.315 "data_offset": 2048, 00:09:19.315 "data_size": 63488 00:09:19.315 } 00:09:19.315 ] 00:09:19.315 }' 00:09:19.315 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.315 15:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.574 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:19.574 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:19.574 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:19.574 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:19.574 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:19.574 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:19.574 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:19.574 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:19.574 15:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.574 15:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.574 [2024-11-20 15:17:05.931491] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:19.574 15:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.574 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:19.574 "name": "raid_bdev1", 00:09:19.574 "aliases": [ 00:09:19.574 "73a62280-c30a-4650-85e3-a58f1f21541b" 00:09:19.574 ], 00:09:19.574 "product_name": "Raid Volume", 00:09:19.574 "block_size": 512, 00:09:19.574 "num_blocks": 190464, 00:09:19.574 "uuid": "73a62280-c30a-4650-85e3-a58f1f21541b", 00:09:19.574 "assigned_rate_limits": { 00:09:19.574 "rw_ios_per_sec": 0, 00:09:19.574 "rw_mbytes_per_sec": 0, 00:09:19.574 "r_mbytes_per_sec": 0, 00:09:19.574 "w_mbytes_per_sec": 0 00:09:19.574 }, 00:09:19.574 "claimed": false, 00:09:19.574 "zoned": false, 00:09:19.574 "supported_io_types": { 00:09:19.574 "read": true, 00:09:19.574 "write": true, 00:09:19.574 "unmap": true, 00:09:19.574 "flush": true, 00:09:19.574 "reset": true, 00:09:19.574 "nvme_admin": false, 00:09:19.574 "nvme_io": false, 00:09:19.574 "nvme_io_md": false, 00:09:19.574 "write_zeroes": true, 00:09:19.574 "zcopy": false, 00:09:19.574 "get_zone_info": false, 00:09:19.574 "zone_management": false, 00:09:19.575 "zone_append": false, 00:09:19.575 "compare": 
false, 00:09:19.575 "compare_and_write": false, 00:09:19.575 "abort": false, 00:09:19.575 "seek_hole": false, 00:09:19.575 "seek_data": false, 00:09:19.575 "copy": false, 00:09:19.575 "nvme_iov_md": false 00:09:19.575 }, 00:09:19.575 "memory_domains": [ 00:09:19.575 { 00:09:19.575 "dma_device_id": "system", 00:09:19.575 "dma_device_type": 1 00:09:19.575 }, 00:09:19.575 { 00:09:19.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.575 "dma_device_type": 2 00:09:19.575 }, 00:09:19.575 { 00:09:19.575 "dma_device_id": "system", 00:09:19.575 "dma_device_type": 1 00:09:19.575 }, 00:09:19.575 { 00:09:19.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.575 "dma_device_type": 2 00:09:19.575 }, 00:09:19.575 { 00:09:19.575 "dma_device_id": "system", 00:09:19.575 "dma_device_type": 1 00:09:19.575 }, 00:09:19.575 { 00:09:19.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.575 "dma_device_type": 2 00:09:19.575 } 00:09:19.575 ], 00:09:19.575 "driver_specific": { 00:09:19.575 "raid": { 00:09:19.575 "uuid": "73a62280-c30a-4650-85e3-a58f1f21541b", 00:09:19.575 "strip_size_kb": 64, 00:09:19.575 "state": "online", 00:09:19.575 "raid_level": "concat", 00:09:19.575 "superblock": true, 00:09:19.575 "num_base_bdevs": 3, 00:09:19.575 "num_base_bdevs_discovered": 3, 00:09:19.575 "num_base_bdevs_operational": 3, 00:09:19.575 "base_bdevs_list": [ 00:09:19.575 { 00:09:19.575 "name": "pt1", 00:09:19.575 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:19.575 "is_configured": true, 00:09:19.575 "data_offset": 2048, 00:09:19.575 "data_size": 63488 00:09:19.575 }, 00:09:19.575 { 00:09:19.575 "name": "pt2", 00:09:19.575 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:19.575 "is_configured": true, 00:09:19.575 "data_offset": 2048, 00:09:19.575 "data_size": 63488 00:09:19.575 }, 00:09:19.575 { 00:09:19.575 "name": "pt3", 00:09:19.575 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:19.575 "is_configured": true, 00:09:19.575 "data_offset": 2048, 00:09:19.575 
"data_size": 63488 00:09:19.575 } 00:09:19.575 ] 00:09:19.575 } 00:09:19.575 } 00:09:19.575 }' 00:09:19.575 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:19.575 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:19.575 pt2 00:09:19.575 pt3' 00:09:19.575 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.575 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:19.575 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.575 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:19.575 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.575 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.575 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.834 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.834 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.834 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.834 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.834 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:19.834 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.834 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.834 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.834 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.834 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.834 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.834 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.834 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:19.834 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.834 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.834 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.834 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.834 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.834 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.834 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:19.834 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:19.834 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.834 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.834 [2024-11-20 15:17:06.195425] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:19.834 15:17:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.834 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=73a62280-c30a-4650-85e3-a58f1f21541b 00:09:19.834 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 73a62280-c30a-4650-85e3-a58f1f21541b ']' 00:09:19.834 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:19.834 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.834 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.835 [2024-11-20 15:17:06.235160] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:19.835 [2024-11-20 15:17:06.235285] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:19.835 [2024-11-20 15:17:06.235420] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.835 [2024-11-20 15:17:06.235581] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:19.835 [2024-11-20 15:17:06.235755] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:19.835 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.835 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.835 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:19.835 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.835 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.835 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.835 15:17:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:19.835 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:19.835 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:19.835 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:19.835 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.835 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.835 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.835 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:19.835 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:19.835 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.835 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.835 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.835 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:19.835 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:19.835 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.835 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.094 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.094 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:20.094 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.094 15:17:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:20.094 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.094 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.094 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:20.094 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:20.094 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:20.094 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:20.094 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:20.094 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:20.094 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:20.094 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:20.094 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:20.094 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.094 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.094 [2024-11-20 15:17:06.375219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:20.094 [2024-11-20 15:17:06.377282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:09:20.094 [2024-11-20 15:17:06.377330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:20.094 [2024-11-20 15:17:06.377378] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:20.094 [2024-11-20 15:17:06.377431] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:20.094 [2024-11-20 15:17:06.377453] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:20.094 [2024-11-20 15:17:06.377473] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:20.094 [2024-11-20 15:17:06.377484] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:20.094 request: 00:09:20.094 { 00:09:20.094 "name": "raid_bdev1", 00:09:20.094 "raid_level": "concat", 00:09:20.094 "base_bdevs": [ 00:09:20.094 "malloc1", 00:09:20.094 "malloc2", 00:09:20.094 "malloc3" 00:09:20.094 ], 00:09:20.094 "strip_size_kb": 64, 00:09:20.094 "superblock": false, 00:09:20.094 "method": "bdev_raid_create", 00:09:20.094 "req_id": 1 00:09:20.094 } 00:09:20.095 Got JSON-RPC error response 00:09:20.095 response: 00:09:20.095 { 00:09:20.095 "code": -17, 00:09:20.095 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:20.095 } 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.095 [2024-11-20 15:17:06.435166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:20.095 [2024-11-20 15:17:06.435216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.095 [2024-11-20 15:17:06.435238] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:20.095 [2024-11-20 15:17:06.435250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.095 [2024-11-20 15:17:06.437595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.095 [2024-11-20 15:17:06.437747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:20.095 [2024-11-20 15:17:06.437839] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:20.095 [2024-11-20 15:17:06.437890] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:20.095 pt1 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.095 "name": "raid_bdev1", 
00:09:20.095 "uuid": "73a62280-c30a-4650-85e3-a58f1f21541b", 00:09:20.095 "strip_size_kb": 64, 00:09:20.095 "state": "configuring", 00:09:20.095 "raid_level": "concat", 00:09:20.095 "superblock": true, 00:09:20.095 "num_base_bdevs": 3, 00:09:20.095 "num_base_bdevs_discovered": 1, 00:09:20.095 "num_base_bdevs_operational": 3, 00:09:20.095 "base_bdevs_list": [ 00:09:20.095 { 00:09:20.095 "name": "pt1", 00:09:20.095 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:20.095 "is_configured": true, 00:09:20.095 "data_offset": 2048, 00:09:20.095 "data_size": 63488 00:09:20.095 }, 00:09:20.095 { 00:09:20.095 "name": null, 00:09:20.095 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.095 "is_configured": false, 00:09:20.095 "data_offset": 2048, 00:09:20.095 "data_size": 63488 00:09:20.095 }, 00:09:20.095 { 00:09:20.095 "name": null, 00:09:20.095 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:20.095 "is_configured": false, 00:09:20.095 "data_offset": 2048, 00:09:20.095 "data_size": 63488 00:09:20.095 } 00:09:20.095 ] 00:09:20.095 }' 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.095 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.354 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:20.355 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:20.355 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.355 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.355 [2024-11-20 15:17:06.835211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:20.614 [2024-11-20 15:17:06.835423] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.614 [2024-11-20 15:17:06.835464] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:20.614 [2024-11-20 15:17:06.835476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.614 [2024-11-20 15:17:06.835939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.614 [2024-11-20 15:17:06.835961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:20.614 [2024-11-20 15:17:06.836053] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:20.614 [2024-11-20 15:17:06.836082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:20.614 pt2 00:09:20.614 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.614 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:20.614 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.614 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.614 [2024-11-20 15:17:06.843201] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:20.614 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.614 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:20.614 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:20.614 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.614 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.614 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.614 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:09:20.614 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.614 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.614 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.614 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.614 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.614 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.614 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.614 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.614 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.614 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.614 "name": "raid_bdev1", 00:09:20.614 "uuid": "73a62280-c30a-4650-85e3-a58f1f21541b", 00:09:20.614 "strip_size_kb": 64, 00:09:20.614 "state": "configuring", 00:09:20.614 "raid_level": "concat", 00:09:20.614 "superblock": true, 00:09:20.614 "num_base_bdevs": 3, 00:09:20.614 "num_base_bdevs_discovered": 1, 00:09:20.614 "num_base_bdevs_operational": 3, 00:09:20.614 "base_bdevs_list": [ 00:09:20.614 { 00:09:20.614 "name": "pt1", 00:09:20.614 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:20.614 "is_configured": true, 00:09:20.614 "data_offset": 2048, 00:09:20.614 "data_size": 63488 00:09:20.614 }, 00:09:20.614 { 00:09:20.614 "name": null, 00:09:20.614 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.614 "is_configured": false, 00:09:20.614 "data_offset": 0, 00:09:20.614 "data_size": 63488 00:09:20.614 }, 00:09:20.614 { 00:09:20.614 "name": null, 00:09:20.614 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:20.614 "is_configured": false, 00:09:20.614 "data_offset": 2048, 00:09:20.614 "data_size": 63488 00:09:20.614 } 00:09:20.614 ] 00:09:20.614 }' 00:09:20.614 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.614 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.873 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:20.873 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:20.873 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:20.873 15:17:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.873 15:17:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.873 [2024-11-20 15:17:07.247190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:20.873 [2024-11-20 15:17:07.247265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.873 [2024-11-20 15:17:07.247287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:20.873 [2024-11-20 15:17:07.247301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.873 [2024-11-20 15:17:07.247784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.873 [2024-11-20 15:17:07.247814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:20.873 [2024-11-20 15:17:07.247900] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:20.873 [2024-11-20 15:17:07.247932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:20.873 pt2 00:09:20.873 15:17:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.873 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:20.873 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:20.873 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:20.873 15:17:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.873 15:17:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.873 [2024-11-20 15:17:07.259166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:20.873 [2024-11-20 15:17:07.259222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.873 [2024-11-20 15:17:07.259239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:20.873 [2024-11-20 15:17:07.259253] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.873 [2024-11-20 15:17:07.259633] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.873 [2024-11-20 15:17:07.259675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:20.873 [2024-11-20 15:17:07.259741] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:20.873 [2024-11-20 15:17:07.259764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:20.873 [2024-11-20 15:17:07.259874] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:20.873 [2024-11-20 15:17:07.259892] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:20.873 [2024-11-20 15:17:07.260143] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:09:20.873 [2024-11-20 15:17:07.260286] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:20.873 [2024-11-20 15:17:07.260304] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:20.873 [2024-11-20 15:17:07.260438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.873 pt3 00:09:20.873 15:17:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.873 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:20.873 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:20.873 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:20.873 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:20.873 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.873 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.873 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.874 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.874 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.874 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.874 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.874 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.874 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.874 15:17:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.874 15:17:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.874 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.874 15:17:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.874 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.874 "name": "raid_bdev1", 00:09:20.874 "uuid": "73a62280-c30a-4650-85e3-a58f1f21541b", 00:09:20.874 "strip_size_kb": 64, 00:09:20.874 "state": "online", 00:09:20.874 "raid_level": "concat", 00:09:20.874 "superblock": true, 00:09:20.874 "num_base_bdevs": 3, 00:09:20.874 "num_base_bdevs_discovered": 3, 00:09:20.874 "num_base_bdevs_operational": 3, 00:09:20.874 "base_bdevs_list": [ 00:09:20.874 { 00:09:20.874 "name": "pt1", 00:09:20.874 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:20.874 "is_configured": true, 00:09:20.874 "data_offset": 2048, 00:09:20.874 "data_size": 63488 00:09:20.874 }, 00:09:20.874 { 00:09:20.874 "name": "pt2", 00:09:20.874 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.874 "is_configured": true, 00:09:20.874 "data_offset": 2048, 00:09:20.874 "data_size": 63488 00:09:20.874 }, 00:09:20.874 { 00:09:20.874 "name": "pt3", 00:09:20.874 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:20.874 "is_configured": true, 00:09:20.874 "data_offset": 2048, 00:09:20.874 "data_size": 63488 00:09:20.874 } 00:09:20.874 ] 00:09:20.874 }' 00:09:20.874 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.874 15:17:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.442 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:21.442 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:09:21.442 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:21.442 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:21.442 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:21.442 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:21.442 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:21.442 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:21.442 15:17:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.442 15:17:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.442 [2024-11-20 15:17:07.659624] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:21.442 15:17:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.442 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:21.442 "name": "raid_bdev1", 00:09:21.442 "aliases": [ 00:09:21.442 "73a62280-c30a-4650-85e3-a58f1f21541b" 00:09:21.442 ], 00:09:21.442 "product_name": "Raid Volume", 00:09:21.442 "block_size": 512, 00:09:21.442 "num_blocks": 190464, 00:09:21.442 "uuid": "73a62280-c30a-4650-85e3-a58f1f21541b", 00:09:21.442 "assigned_rate_limits": { 00:09:21.442 "rw_ios_per_sec": 0, 00:09:21.442 "rw_mbytes_per_sec": 0, 00:09:21.442 "r_mbytes_per_sec": 0, 00:09:21.442 "w_mbytes_per_sec": 0 00:09:21.442 }, 00:09:21.442 "claimed": false, 00:09:21.442 "zoned": false, 00:09:21.442 "supported_io_types": { 00:09:21.442 "read": true, 00:09:21.442 "write": true, 00:09:21.442 "unmap": true, 00:09:21.442 "flush": true, 00:09:21.442 "reset": true, 00:09:21.442 "nvme_admin": false, 00:09:21.442 "nvme_io": false, 
00:09:21.442 "nvme_io_md": false, 00:09:21.442 "write_zeroes": true, 00:09:21.442 "zcopy": false, 00:09:21.442 "get_zone_info": false, 00:09:21.442 "zone_management": false, 00:09:21.442 "zone_append": false, 00:09:21.442 "compare": false, 00:09:21.442 "compare_and_write": false, 00:09:21.442 "abort": false, 00:09:21.442 "seek_hole": false, 00:09:21.442 "seek_data": false, 00:09:21.442 "copy": false, 00:09:21.442 "nvme_iov_md": false 00:09:21.442 }, 00:09:21.442 "memory_domains": [ 00:09:21.442 { 00:09:21.442 "dma_device_id": "system", 00:09:21.442 "dma_device_type": 1 00:09:21.442 }, 00:09:21.443 { 00:09:21.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.443 "dma_device_type": 2 00:09:21.443 }, 00:09:21.443 { 00:09:21.443 "dma_device_id": "system", 00:09:21.443 "dma_device_type": 1 00:09:21.443 }, 00:09:21.443 { 00:09:21.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.443 "dma_device_type": 2 00:09:21.443 }, 00:09:21.443 { 00:09:21.443 "dma_device_id": "system", 00:09:21.443 "dma_device_type": 1 00:09:21.443 }, 00:09:21.443 { 00:09:21.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.443 "dma_device_type": 2 00:09:21.443 } 00:09:21.443 ], 00:09:21.443 "driver_specific": { 00:09:21.443 "raid": { 00:09:21.443 "uuid": "73a62280-c30a-4650-85e3-a58f1f21541b", 00:09:21.443 "strip_size_kb": 64, 00:09:21.443 "state": "online", 00:09:21.443 "raid_level": "concat", 00:09:21.443 "superblock": true, 00:09:21.443 "num_base_bdevs": 3, 00:09:21.443 "num_base_bdevs_discovered": 3, 00:09:21.443 "num_base_bdevs_operational": 3, 00:09:21.443 "base_bdevs_list": [ 00:09:21.443 { 00:09:21.443 "name": "pt1", 00:09:21.443 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:21.443 "is_configured": true, 00:09:21.443 "data_offset": 2048, 00:09:21.443 "data_size": 63488 00:09:21.443 }, 00:09:21.443 { 00:09:21.443 "name": "pt2", 00:09:21.443 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:21.443 "is_configured": true, 00:09:21.443 "data_offset": 2048, 00:09:21.443 
"data_size": 63488 00:09:21.443 }, 00:09:21.443 { 00:09:21.443 "name": "pt3", 00:09:21.443 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:21.443 "is_configured": true, 00:09:21.443 "data_offset": 2048, 00:09:21.443 "data_size": 63488 00:09:21.443 } 00:09:21.443 ] 00:09:21.443 } 00:09:21.443 } 00:09:21.443 }' 00:09:21.443 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:21.443 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:21.443 pt2 00:09:21.443 pt3' 00:09:21.443 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.443 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:21.443 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.443 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:21.443 15:17:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.443 15:17:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.443 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.443 15:17:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.443 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.443 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.443 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.443 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:09:21.443 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.443 15:17:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.443 15:17:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.443 15:17:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.443 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.443 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.443 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.443 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:21.443 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.443 15:17:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.443 15:17:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.443 15:17:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.702 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.702 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.702 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:21.702 15:17:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.702 15:17:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.702 15:17:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:21.702 [2024-11-20 15:17:07.935470] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:21.702 15:17:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.702 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 73a62280-c30a-4650-85e3-a58f1f21541b '!=' 73a62280-c30a-4650-85e3-a58f1f21541b ']' 00:09:21.702 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:21.702 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:21.702 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:21.702 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66716 00:09:21.702 15:17:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66716 ']' 00:09:21.702 15:17:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66716 00:09:21.702 15:17:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:21.702 15:17:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:21.702 15:17:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66716 00:09:21.702 15:17:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:21.702 15:17:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:21.702 killing process with pid 66716 00:09:21.702 15:17:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66716' 00:09:21.702 15:17:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66716 00:09:21.702 [2024-11-20 15:17:08.008968] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:09:21.702 [2024-11-20 15:17:08.009083] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:21.702 15:17:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66716 00:09:21.702 [2024-11-20 15:17:08.009152] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:21.702 [2024-11-20 15:17:08.009169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:21.961 [2024-11-20 15:17:08.313432] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:23.337 15:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:23.337 00:09:23.337 real 0m5.054s 00:09:23.337 user 0m7.165s 00:09:23.337 sys 0m0.978s 00:09:23.337 15:17:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.337 15:17:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.337 ************************************ 00:09:23.337 END TEST raid_superblock_test 00:09:23.337 ************************************ 00:09:23.337 15:17:09 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:23.337 15:17:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:23.337 15:17:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:23.337 15:17:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:23.337 ************************************ 00:09:23.337 START TEST raid_read_error_test 00:09:23.337 ************************************ 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:23.337 15:17:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.x0O8ndast6 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66973 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66973 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 66973 ']' 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.337 15:17:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.337 [2024-11-20 15:17:09.645410] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:09:23.337 [2024-11-20 15:17:09.645532] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66973 ] 00:09:23.596 [2024-11-20 15:17:09.825664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.596 [2024-11-20 15:17:09.937360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.855 [2024-11-20 15:17:10.143329] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.855 [2024-11-20 15:17:10.143376] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.114 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.114 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:24.114 15:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:24.114 15:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:24.114 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.114 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.114 BaseBdev1_malloc 00:09:24.114 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.114 15:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:24.114 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.114 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.114 true 00:09:24.114 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:24.114 15:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:24.114 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.114 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.114 [2024-11-20 15:17:10.538240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:24.114 [2024-11-20 15:17:10.538296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.114 [2024-11-20 15:17:10.538322] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:24.114 [2024-11-20 15:17:10.538338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.114 [2024-11-20 15:17:10.540687] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.114 [2024-11-20 15:17:10.540723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:24.114 BaseBdev1 00:09:24.114 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.114 15:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:24.114 15:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:24.114 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.114 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.114 BaseBdev2_malloc 00:09:24.114 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.114 15:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:24.114 15:17:10 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.114 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.417 true 00:09:24.417 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.417 15:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:24.417 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.417 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.417 [2024-11-20 15:17:10.603278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:24.417 [2024-11-20 15:17:10.603334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.417 [2024-11-20 15:17:10.603351] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:24.417 [2024-11-20 15:17:10.603366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.417 [2024-11-20 15:17:10.605711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.417 [2024-11-20 15:17:10.605751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:24.417 BaseBdev2 00:09:24.417 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.417 15:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:24.417 15:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:24.417 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.417 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.417 BaseBdev3_malloc 00:09:24.417 15:17:10 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.417 15:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:24.417 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.417 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.417 true 00:09:24.417 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.417 15:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:24.417 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.417 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.417 [2024-11-20 15:17:10.683087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:24.417 [2024-11-20 15:17:10.683136] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.417 [2024-11-20 15:17:10.683156] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:24.417 [2024-11-20 15:17:10.683169] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.417 [2024-11-20 15:17:10.685509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.417 [2024-11-20 15:17:10.685549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:24.417 BaseBdev3 00:09:24.417 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.418 15:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:24.418 15:17:10 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.418 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.418 [2024-11-20 15:17:10.695171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:24.418 [2024-11-20 15:17:10.697193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:24.418 [2024-11-20 15:17:10.697270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:24.418 [2024-11-20 15:17:10.697460] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:24.418 [2024-11-20 15:17:10.697472] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:24.418 [2024-11-20 15:17:10.697768] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:24.418 [2024-11-20 15:17:10.697949] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:24.418 [2024-11-20 15:17:10.697976] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:24.418 [2024-11-20 15:17:10.698117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.418 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.418 15:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:24.418 15:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:24.418 15:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.418 15:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:24.418 15:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.418 15:17:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.418 15:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.418 15:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.418 15:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.418 15:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.418 15:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.418 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.418 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.418 15:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.418 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.418 15:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.418 "name": "raid_bdev1", 00:09:24.418 "uuid": "22803430-bfc0-4655-b254-95798ab68a18", 00:09:24.418 "strip_size_kb": 64, 00:09:24.418 "state": "online", 00:09:24.418 "raid_level": "concat", 00:09:24.418 "superblock": true, 00:09:24.418 "num_base_bdevs": 3, 00:09:24.418 "num_base_bdevs_discovered": 3, 00:09:24.418 "num_base_bdevs_operational": 3, 00:09:24.418 "base_bdevs_list": [ 00:09:24.418 { 00:09:24.418 "name": "BaseBdev1", 00:09:24.418 "uuid": "0b270ef3-6acc-5f83-beac-114ace303346", 00:09:24.418 "is_configured": true, 00:09:24.418 "data_offset": 2048, 00:09:24.418 "data_size": 63488 00:09:24.418 }, 00:09:24.418 { 00:09:24.418 "name": "BaseBdev2", 00:09:24.418 "uuid": "b042e6bd-7586-5e75-a5e9-dfa1b8ee7322", 00:09:24.418 "is_configured": true, 00:09:24.418 "data_offset": 2048, 00:09:24.418 "data_size": 63488 
00:09:24.418 }, 00:09:24.418 { 00:09:24.418 "name": "BaseBdev3", 00:09:24.418 "uuid": "7bfd9091-1b16-5235-9c88-ff3e773cc201", 00:09:24.418 "is_configured": true, 00:09:24.418 "data_offset": 2048, 00:09:24.418 "data_size": 63488 00:09:24.418 } 00:09:24.418 ] 00:09:24.418 }' 00:09:24.418 15:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.418 15:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.676 15:17:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:24.676 15:17:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:24.935 [2024-11-20 15:17:11.212240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:25.873 15:17:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:25.873 15:17:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.873 15:17:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.873 15:17:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.873 15:17:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:25.873 15:17:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:25.873 15:17:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:25.873 15:17:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:25.873 15:17:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:25.873 15:17:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:25.873 15:17:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.873 15:17:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.873 15:17:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.873 15:17:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.873 15:17:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.873 15:17:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.873 15:17:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.873 15:17:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.873 15:17:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.873 15:17:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:25.873 15:17:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.873 15:17:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.873 15:17:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.873 "name": "raid_bdev1", 00:09:25.873 "uuid": "22803430-bfc0-4655-b254-95798ab68a18", 00:09:25.873 "strip_size_kb": 64, 00:09:25.873 "state": "online", 00:09:25.873 "raid_level": "concat", 00:09:25.873 "superblock": true, 00:09:25.873 "num_base_bdevs": 3, 00:09:25.873 "num_base_bdevs_discovered": 3, 00:09:25.873 "num_base_bdevs_operational": 3, 00:09:25.873 "base_bdevs_list": [ 00:09:25.873 { 00:09:25.873 "name": "BaseBdev1", 00:09:25.873 "uuid": "0b270ef3-6acc-5f83-beac-114ace303346", 00:09:25.873 "is_configured": true, 00:09:25.873 "data_offset": 2048, 00:09:25.873 "data_size": 63488 
00:09:25.873 }, 00:09:25.873 { 00:09:25.873 "name": "BaseBdev2", 00:09:25.873 "uuid": "b042e6bd-7586-5e75-a5e9-dfa1b8ee7322", 00:09:25.873 "is_configured": true, 00:09:25.873 "data_offset": 2048, 00:09:25.873 "data_size": 63488 00:09:25.873 }, 00:09:25.873 { 00:09:25.873 "name": "BaseBdev3", 00:09:25.873 "uuid": "7bfd9091-1b16-5235-9c88-ff3e773cc201", 00:09:25.873 "is_configured": true, 00:09:25.873 "data_offset": 2048, 00:09:25.873 "data_size": 63488 00:09:25.873 } 00:09:25.873 ] 00:09:25.873 }' 00:09:25.873 15:17:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.873 15:17:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.132 15:17:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:26.132 15:17:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.132 15:17:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.132 [2024-11-20 15:17:12.566827] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:26.132 [2024-11-20 15:17:12.566859] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:26.132 [2024-11-20 15:17:12.569729] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:26.132 [2024-11-20 15:17:12.569898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.132 [2024-11-20 15:17:12.569980] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:26.132 [2024-11-20 15:17:12.570258] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:26.132 { 00:09:26.132 "results": [ 00:09:26.132 { 00:09:26.132 "job": "raid_bdev1", 00:09:26.132 "core_mask": "0x1", 00:09:26.132 "workload": "randrw", 00:09:26.132 "percentage": 50, 
00:09:26.132 "status": "finished", 00:09:26.132 "queue_depth": 1, 00:09:26.132 "io_size": 131072, 00:09:26.132 "runtime": 1.354794, 00:09:26.132 "iops": 16539.04578851102, 00:09:26.132 "mibps": 2067.3807235638774, 00:09:26.132 "io_failed": 1, 00:09:26.132 "io_timeout": 0, 00:09:26.132 "avg_latency_us": 83.25490638025146, 00:09:26.132 "min_latency_us": 27.142168674698794, 00:09:26.132 "max_latency_us": 1394.9429718875501 00:09:26.132 } 00:09:26.132 ], 00:09:26.132 "core_count": 1 00:09:26.132 } 00:09:26.132 15:17:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.132 15:17:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66973 00:09:26.132 15:17:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 66973 ']' 00:09:26.132 15:17:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 66973 00:09:26.132 15:17:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:26.132 15:17:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:26.132 15:17:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66973 00:09:26.390 killing process with pid 66973 00:09:26.390 15:17:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:26.390 15:17:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:26.390 15:17:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66973' 00:09:26.390 15:17:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 66973 00:09:26.390 [2024-11-20 15:17:12.619616] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:26.390 15:17:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 66973 00:09:26.390 [2024-11-20 
15:17:12.852484] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:27.769 15:17:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.x0O8ndast6 00:09:27.769 15:17:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:27.769 15:17:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:27.769 15:17:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:27.769 15:17:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:27.769 15:17:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:27.769 15:17:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:27.769 15:17:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:27.769 00:09:27.769 real 0m4.522s 00:09:27.769 user 0m5.344s 00:09:27.769 sys 0m0.600s 00:09:27.769 15:17:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.769 15:17:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.769 ************************************ 00:09:27.769 END TEST raid_read_error_test 00:09:27.769 ************************************ 00:09:27.769 15:17:14 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:27.769 15:17:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:27.769 15:17:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.769 15:17:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:27.769 ************************************ 00:09:27.769 START TEST raid_write_error_test 00:09:27.769 ************************************ 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:09:27.770 15:17:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:27.770 15:17:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.P6iY7wtNWk 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67119 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67119 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67119 ']' 00:09:27.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.770 15:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.770 [2024-11-20 15:17:14.244777] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:09:27.770 [2024-11-20 15:17:14.244900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67119 ] 00:09:28.029 [2024-11-20 15:17:14.421931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.287 [2024-11-20 15:17:14.533157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.287 [2024-11-20 15:17:14.731074] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.287 [2024-11-20 15:17:14.731118] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.855 BaseBdev1_malloc 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.855 true 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.855 [2024-11-20 15:17:15.146540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:28.855 [2024-11-20 15:17:15.146601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.855 [2024-11-20 15:17:15.146625] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:28.855 [2024-11-20 15:17:15.146639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.855 [2024-11-20 15:17:15.149073] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.855 [2024-11-20 15:17:15.149262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:28.855 BaseBdev1 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:28.855 BaseBdev2_malloc 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.855 true 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.855 [2024-11-20 15:17:15.214341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:28.855 [2024-11-20 15:17:15.214400] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.855 [2024-11-20 15:17:15.214419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:28.855 [2024-11-20 15:17:15.214433] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.855 [2024-11-20 15:17:15.216773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.855 [2024-11-20 15:17:15.216810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:28.855 BaseBdev2 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:28.855 15:17:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.855 BaseBdev3_malloc 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.855 true 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.855 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.855 [2024-11-20 15:17:15.294528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:28.855 [2024-11-20 15:17:15.294582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.855 [2024-11-20 15:17:15.294603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:28.855 [2024-11-20 15:17:15.294617] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.855 [2024-11-20 15:17:15.296967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.855 [2024-11-20 15:17:15.297010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:28.855 BaseBdev3 00:09:28.856 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.856 15:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:28.856 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.856 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.856 [2024-11-20 15:17:15.306591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:28.856 [2024-11-20 15:17:15.308963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:28.856 [2024-11-20 15:17:15.309038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:28.856 [2024-11-20 15:17:15.309238] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:28.856 [2024-11-20 15:17:15.309252] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:28.856 [2024-11-20 15:17:15.309519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:28.856 [2024-11-20 15:17:15.309689] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:28.856 [2024-11-20 15:17:15.309707] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:28.856 [2024-11-20 15:17:15.309891] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.856 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.856 15:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:28.856 15:17:15 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:28.856 15:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.856 15:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.856 15:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.856 15:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.856 15:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.856 15:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.856 15:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.856 15:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.856 15:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.856 15:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:28.856 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.856 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.114 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.114 15:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.114 "name": "raid_bdev1", 00:09:29.114 "uuid": "fc4c86e9-9541-439a-83e0-7115bc7b5932", 00:09:29.114 "strip_size_kb": 64, 00:09:29.114 "state": "online", 00:09:29.114 "raid_level": "concat", 00:09:29.114 "superblock": true, 00:09:29.114 "num_base_bdevs": 3, 00:09:29.114 "num_base_bdevs_discovered": 3, 00:09:29.114 "num_base_bdevs_operational": 3, 00:09:29.114 "base_bdevs_list": [ 00:09:29.114 { 00:09:29.114 
"name": "BaseBdev1", 00:09:29.114 "uuid": "421488a7-705e-5019-8bec-38ff96e44831", 00:09:29.114 "is_configured": true, 00:09:29.114 "data_offset": 2048, 00:09:29.114 "data_size": 63488 00:09:29.114 }, 00:09:29.114 { 00:09:29.114 "name": "BaseBdev2", 00:09:29.114 "uuid": "428712a3-647a-5f2a-97e3-712efc6baae6", 00:09:29.114 "is_configured": true, 00:09:29.114 "data_offset": 2048, 00:09:29.114 "data_size": 63488 00:09:29.114 }, 00:09:29.114 { 00:09:29.114 "name": "BaseBdev3", 00:09:29.114 "uuid": "b4d315af-cf49-5cba-b25a-54ff9993e867", 00:09:29.114 "is_configured": true, 00:09:29.114 "data_offset": 2048, 00:09:29.114 "data_size": 63488 00:09:29.114 } 00:09:29.114 ] 00:09:29.114 }' 00:09:29.114 15:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.114 15:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.373 15:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:29.373 15:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:29.373 [2024-11-20 15:17:15.839202] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:30.309 15:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:30.309 15:17:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.309 15:17:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.309 15:17:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.309 15:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:30.309 15:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:30.309 15:17:16 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:30.309 15:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:30.309 15:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:30.309 15:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:30.309 15:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.309 15:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.309 15:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.309 15:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.309 15:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.309 15:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.309 15:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.309 15:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.309 15:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:30.309 15:17:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.309 15:17:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.568 15:17:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.568 15:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.568 "name": "raid_bdev1", 00:09:30.568 "uuid": "fc4c86e9-9541-439a-83e0-7115bc7b5932", 00:09:30.568 "strip_size_kb": 64, 00:09:30.568 "state": "online", 
00:09:30.568 "raid_level": "concat", 00:09:30.568 "superblock": true, 00:09:30.568 "num_base_bdevs": 3, 00:09:30.568 "num_base_bdevs_discovered": 3, 00:09:30.568 "num_base_bdevs_operational": 3, 00:09:30.568 "base_bdevs_list": [ 00:09:30.568 { 00:09:30.568 "name": "BaseBdev1", 00:09:30.568 "uuid": "421488a7-705e-5019-8bec-38ff96e44831", 00:09:30.568 "is_configured": true, 00:09:30.568 "data_offset": 2048, 00:09:30.568 "data_size": 63488 00:09:30.568 }, 00:09:30.568 { 00:09:30.568 "name": "BaseBdev2", 00:09:30.568 "uuid": "428712a3-647a-5f2a-97e3-712efc6baae6", 00:09:30.568 "is_configured": true, 00:09:30.568 "data_offset": 2048, 00:09:30.568 "data_size": 63488 00:09:30.568 }, 00:09:30.568 { 00:09:30.568 "name": "BaseBdev3", 00:09:30.568 "uuid": "b4d315af-cf49-5cba-b25a-54ff9993e867", 00:09:30.568 "is_configured": true, 00:09:30.568 "data_offset": 2048, 00:09:30.568 "data_size": 63488 00:09:30.568 } 00:09:30.568 ] 00:09:30.568 }' 00:09:30.568 15:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.568 15:17:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.827 15:17:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:30.828 15:17:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.828 15:17:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.828 [2024-11-20 15:17:17.189883] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:30.828 [2024-11-20 15:17:17.189913] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:30.828 [2024-11-20 15:17:17.192924] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.828 [2024-11-20 15:17:17.193129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.828 [2024-11-20 15:17:17.193189] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:30.828 [2024-11-20 15:17:17.193206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:30.828 { 00:09:30.828 "results": [ 00:09:30.828 { 00:09:30.828 "job": "raid_bdev1", 00:09:30.828 "core_mask": "0x1", 00:09:30.828 "workload": "randrw", 00:09:30.828 "percentage": 50, 00:09:30.828 "status": "finished", 00:09:30.828 "queue_depth": 1, 00:09:30.828 "io_size": 131072, 00:09:30.828 "runtime": 1.350925, 00:09:30.828 "iops": 16385.069489423913, 00:09:30.828 "mibps": 2048.133686177989, 00:09:30.828 "io_failed": 1, 00:09:30.828 "io_timeout": 0, 00:09:30.828 "avg_latency_us": 84.1127563379648, 00:09:30.828 "min_latency_us": 26.936546184738955, 00:09:30.828 "max_latency_us": 1381.7831325301204 00:09:30.828 } 00:09:30.828 ], 00:09:30.828 "core_count": 1 00:09:30.828 } 00:09:30.828 15:17:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.828 15:17:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67119 00:09:30.828 15:17:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67119 ']' 00:09:30.828 15:17:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67119 00:09:30.828 15:17:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:30.828 15:17:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.828 15:17:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67119 00:09:30.828 killing process with pid 67119 00:09:30.828 15:17:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:30.828 15:17:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:30.828 15:17:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67119' 00:09:30.828 15:17:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67119 00:09:30.828 [2024-11-20 15:17:17.229422] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:30.828 15:17:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67119 00:09:31.087 [2024-11-20 15:17:17.462000] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:32.465 15:17:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:32.465 15:17:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.P6iY7wtNWk 00:09:32.465 15:17:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:32.465 ************************************ 00:09:32.465 END TEST raid_write_error_test 00:09:32.465 ************************************ 00:09:32.465 15:17:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:32.465 15:17:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:32.465 15:17:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:32.465 15:17:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:32.465 15:17:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:32.465 00:09:32.465 real 0m4.532s 00:09:32.465 user 0m5.322s 00:09:32.465 sys 0m0.611s 00:09:32.465 15:17:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.465 15:17:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.465 15:17:18 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:32.465 15:17:18 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:32.465 15:17:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:32.465 15:17:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.465 15:17:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:32.465 ************************************ 00:09:32.465 START TEST raid_state_function_test 00:09:32.465 ************************************ 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:32.465 Process raid pid: 67258 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67258 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67258' 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67258 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67258 ']' 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.465 15:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.465 [2024-11-20 15:17:18.845675] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:09:32.465 [2024-11-20 15:17:18.845976] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.724 [2024-11-20 15:17:19.017733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.724 [2024-11-20 15:17:19.136550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.983 [2024-11-20 15:17:19.347434] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.983 [2024-11-20 15:17:19.347682] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.242 15:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:33.242 15:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:33.242 15:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:33.242 15:17:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.242 15:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.242 [2024-11-20 15:17:19.690215] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:33.242 [2024-11-20 15:17:19.690412] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:33.242 [2024-11-20 15:17:19.690437] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.242 [2024-11-20 15:17:19.690451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.242 [2024-11-20 15:17:19.690459] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:33.242 [2024-11-20 15:17:19.690472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:33.242 15:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.242 15:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:33.242 15:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.242 15:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.242 15:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.242 15:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.242 15:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.242 15:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.242 15:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.242 
15:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.242 15:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.242 15:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.243 15:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.243 15:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.243 15:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.501 15:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.501 15:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.501 "name": "Existed_Raid", 00:09:33.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.501 "strip_size_kb": 0, 00:09:33.501 "state": "configuring", 00:09:33.501 "raid_level": "raid1", 00:09:33.501 "superblock": false, 00:09:33.501 "num_base_bdevs": 3, 00:09:33.501 "num_base_bdevs_discovered": 0, 00:09:33.501 "num_base_bdevs_operational": 3, 00:09:33.501 "base_bdevs_list": [ 00:09:33.501 { 00:09:33.501 "name": "BaseBdev1", 00:09:33.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.501 "is_configured": false, 00:09:33.501 "data_offset": 0, 00:09:33.501 "data_size": 0 00:09:33.501 }, 00:09:33.501 { 00:09:33.501 "name": "BaseBdev2", 00:09:33.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.501 "is_configured": false, 00:09:33.501 "data_offset": 0, 00:09:33.501 "data_size": 0 00:09:33.502 }, 00:09:33.502 { 00:09:33.502 "name": "BaseBdev3", 00:09:33.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.502 "is_configured": false, 00:09:33.502 "data_offset": 0, 00:09:33.502 "data_size": 0 00:09:33.502 } 00:09:33.502 ] 00:09:33.502 }' 00:09:33.502 15:17:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.502 15:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.760 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:33.760 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.760 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.760 [2024-11-20 15:17:20.141554] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:33.760 [2024-11-20 15:17:20.141592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:33.760 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.760 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:33.760 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.760 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.760 [2024-11-20 15:17:20.153514] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:33.760 [2024-11-20 15:17:20.153564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:33.760 [2024-11-20 15:17:20.153574] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.760 [2024-11-20 15:17:20.153587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.760 [2024-11-20 15:17:20.153594] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:33.760 [2024-11-20 15:17:20.153607] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:33.760 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.760 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:33.760 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.760 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.760 [2024-11-20 15:17:20.204120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.760 BaseBdev1 00:09:33.760 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.760 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:33.760 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:33.760 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:33.760 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:33.760 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:33.760 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:33.760 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:33.760 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.760 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.760 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.760 15:17:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:33.760 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.761 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.761 [ 00:09:33.761 { 00:09:33.761 "name": "BaseBdev1", 00:09:33.761 "aliases": [ 00:09:33.761 "3e3a6978-8dfe-45eb-9ecb-0a90809f4597" 00:09:33.761 ], 00:09:33.761 "product_name": "Malloc disk", 00:09:33.761 "block_size": 512, 00:09:33.761 "num_blocks": 65536, 00:09:33.761 "uuid": "3e3a6978-8dfe-45eb-9ecb-0a90809f4597", 00:09:33.761 "assigned_rate_limits": { 00:09:33.761 "rw_ios_per_sec": 0, 00:09:33.761 "rw_mbytes_per_sec": 0, 00:09:33.761 "r_mbytes_per_sec": 0, 00:09:33.761 "w_mbytes_per_sec": 0 00:09:33.761 }, 00:09:33.761 "claimed": true, 00:09:33.761 "claim_type": "exclusive_write", 00:09:33.761 "zoned": false, 00:09:33.761 "supported_io_types": { 00:09:33.761 "read": true, 00:09:33.761 "write": true, 00:09:33.761 "unmap": true, 00:09:33.761 "flush": true, 00:09:33.761 "reset": true, 00:09:34.019 "nvme_admin": false, 00:09:34.019 "nvme_io": false, 00:09:34.019 "nvme_io_md": false, 00:09:34.019 "write_zeroes": true, 00:09:34.019 "zcopy": true, 00:09:34.019 "get_zone_info": false, 00:09:34.019 "zone_management": false, 00:09:34.019 "zone_append": false, 00:09:34.019 "compare": false, 00:09:34.019 "compare_and_write": false, 00:09:34.019 "abort": true, 00:09:34.019 "seek_hole": false, 00:09:34.019 "seek_data": false, 00:09:34.019 "copy": true, 00:09:34.019 "nvme_iov_md": false 00:09:34.019 }, 00:09:34.019 "memory_domains": [ 00:09:34.019 { 00:09:34.019 "dma_device_id": "system", 00:09:34.019 "dma_device_type": 1 00:09:34.019 }, 00:09:34.019 { 00:09:34.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.019 "dma_device_type": 2 00:09:34.019 } 00:09:34.019 ], 00:09:34.019 "driver_specific": {} 00:09:34.019 } 00:09:34.019 ] 00:09:34.019 15:17:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.019 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:34.019 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:34.019 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.019 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.019 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.019 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.019 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.019 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.019 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.019 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.019 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.019 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.019 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.019 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.019 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.019 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.019 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:34.019 "name": "Existed_Raid", 00:09:34.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.019 "strip_size_kb": 0, 00:09:34.019 "state": "configuring", 00:09:34.019 "raid_level": "raid1", 00:09:34.019 "superblock": false, 00:09:34.019 "num_base_bdevs": 3, 00:09:34.019 "num_base_bdevs_discovered": 1, 00:09:34.019 "num_base_bdevs_operational": 3, 00:09:34.019 "base_bdevs_list": [ 00:09:34.019 { 00:09:34.019 "name": "BaseBdev1", 00:09:34.019 "uuid": "3e3a6978-8dfe-45eb-9ecb-0a90809f4597", 00:09:34.019 "is_configured": true, 00:09:34.019 "data_offset": 0, 00:09:34.019 "data_size": 65536 00:09:34.019 }, 00:09:34.019 { 00:09:34.019 "name": "BaseBdev2", 00:09:34.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.019 "is_configured": false, 00:09:34.019 "data_offset": 0, 00:09:34.019 "data_size": 0 00:09:34.019 }, 00:09:34.019 { 00:09:34.019 "name": "BaseBdev3", 00:09:34.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.019 "is_configured": false, 00:09:34.019 "data_offset": 0, 00:09:34.019 "data_size": 0 00:09:34.019 } 00:09:34.019 ] 00:09:34.019 }' 00:09:34.019 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.019 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.278 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:34.278 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.278 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.278 [2024-11-20 15:17:20.687535] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:34.278 [2024-11-20 15:17:20.687590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:34.278 15:17:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.278 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:34.278 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.278 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.278 [2024-11-20 15:17:20.699559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:34.278 [2024-11-20 15:17:20.701691] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:34.278 [2024-11-20 15:17:20.701853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:34.278 [2024-11-20 15:17:20.701876] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:34.278 [2024-11-20 15:17:20.701890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:34.278 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.278 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:34.278 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.278 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:34.278 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.279 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.279 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.279 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:34.279 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.279 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.279 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.279 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.279 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.279 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.279 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.279 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.279 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.279 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.279 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.279 "name": "Existed_Raid", 00:09:34.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.279 "strip_size_kb": 0, 00:09:34.279 "state": "configuring", 00:09:34.279 "raid_level": "raid1", 00:09:34.279 "superblock": false, 00:09:34.279 "num_base_bdevs": 3, 00:09:34.279 "num_base_bdevs_discovered": 1, 00:09:34.279 "num_base_bdevs_operational": 3, 00:09:34.279 "base_bdevs_list": [ 00:09:34.279 { 00:09:34.279 "name": "BaseBdev1", 00:09:34.279 "uuid": "3e3a6978-8dfe-45eb-9ecb-0a90809f4597", 00:09:34.279 "is_configured": true, 00:09:34.279 "data_offset": 0, 00:09:34.279 "data_size": 65536 00:09:34.279 }, 00:09:34.279 { 00:09:34.279 "name": "BaseBdev2", 00:09:34.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.279 
"is_configured": false, 00:09:34.279 "data_offset": 0, 00:09:34.279 "data_size": 0 00:09:34.279 }, 00:09:34.279 { 00:09:34.279 "name": "BaseBdev3", 00:09:34.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.279 "is_configured": false, 00:09:34.279 "data_offset": 0, 00:09:34.279 "data_size": 0 00:09:34.279 } 00:09:34.279 ] 00:09:34.279 }' 00:09:34.279 15:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.279 15:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.877 [2024-11-20 15:17:21.157455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:34.877 BaseBdev2 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:34.877 15:17:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.877 [ 00:09:34.877 { 00:09:34.877 "name": "BaseBdev2", 00:09:34.877 "aliases": [ 00:09:34.877 "9b75fc77-c607-4ecb-8993-2551426ca048" 00:09:34.877 ], 00:09:34.877 "product_name": "Malloc disk", 00:09:34.877 "block_size": 512, 00:09:34.877 "num_blocks": 65536, 00:09:34.877 "uuid": "9b75fc77-c607-4ecb-8993-2551426ca048", 00:09:34.877 "assigned_rate_limits": { 00:09:34.877 "rw_ios_per_sec": 0, 00:09:34.877 "rw_mbytes_per_sec": 0, 00:09:34.877 "r_mbytes_per_sec": 0, 00:09:34.877 "w_mbytes_per_sec": 0 00:09:34.877 }, 00:09:34.877 "claimed": true, 00:09:34.877 "claim_type": "exclusive_write", 00:09:34.877 "zoned": false, 00:09:34.877 "supported_io_types": { 00:09:34.877 "read": true, 00:09:34.877 "write": true, 00:09:34.877 "unmap": true, 00:09:34.877 "flush": true, 00:09:34.877 "reset": true, 00:09:34.877 "nvme_admin": false, 00:09:34.877 "nvme_io": false, 00:09:34.877 "nvme_io_md": false, 00:09:34.877 "write_zeroes": true, 00:09:34.877 "zcopy": true, 00:09:34.877 "get_zone_info": false, 00:09:34.877 "zone_management": false, 00:09:34.877 "zone_append": false, 00:09:34.877 "compare": false, 00:09:34.877 "compare_and_write": false, 00:09:34.877 "abort": true, 00:09:34.877 "seek_hole": false, 00:09:34.877 "seek_data": false, 00:09:34.877 "copy": true, 00:09:34.877 "nvme_iov_md": false 00:09:34.877 }, 00:09:34.877 
"memory_domains": [ 00:09:34.877 { 00:09:34.877 "dma_device_id": "system", 00:09:34.877 "dma_device_type": 1 00:09:34.877 }, 00:09:34.877 { 00:09:34.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.877 "dma_device_type": 2 00:09:34.877 } 00:09:34.877 ], 00:09:34.877 "driver_specific": {} 00:09:34.877 } 00:09:34.877 ] 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.877 "name": "Existed_Raid", 00:09:34.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.877 "strip_size_kb": 0, 00:09:34.877 "state": "configuring", 00:09:34.877 "raid_level": "raid1", 00:09:34.877 "superblock": false, 00:09:34.877 "num_base_bdevs": 3, 00:09:34.877 "num_base_bdevs_discovered": 2, 00:09:34.877 "num_base_bdevs_operational": 3, 00:09:34.877 "base_bdevs_list": [ 00:09:34.877 { 00:09:34.877 "name": "BaseBdev1", 00:09:34.877 "uuid": "3e3a6978-8dfe-45eb-9ecb-0a90809f4597", 00:09:34.877 "is_configured": true, 00:09:34.877 "data_offset": 0, 00:09:34.877 "data_size": 65536 00:09:34.877 }, 00:09:34.877 { 00:09:34.877 "name": "BaseBdev2", 00:09:34.877 "uuid": "9b75fc77-c607-4ecb-8993-2551426ca048", 00:09:34.877 "is_configured": true, 00:09:34.877 "data_offset": 0, 00:09:34.877 "data_size": 65536 00:09:34.877 }, 00:09:34.877 { 00:09:34.877 "name": "BaseBdev3", 00:09:34.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.877 "is_configured": false, 00:09:34.877 "data_offset": 0, 00:09:34.877 "data_size": 0 00:09:34.877 } 00:09:34.877 ] 00:09:34.877 }' 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.877 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.446 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:35.446 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.446 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.446 [2024-11-20 15:17:21.676161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:35.446 [2024-11-20 15:17:21.676422] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:35.446 [2024-11-20 15:17:21.676453] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:35.446 [2024-11-20 15:17:21.676790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:35.446 [2024-11-20 15:17:21.676972] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:35.446 [2024-11-20 15:17:21.676983] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:35.446 [2024-11-20 15:17:21.677265] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.446 BaseBdev3 00:09:35.446 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.446 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:35.446 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:35.446 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:35.446 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:35.446 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:35.446 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:35.446 15:17:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:35.446 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.446 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.446 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.446 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:35.446 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.446 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.446 [ 00:09:35.446 { 00:09:35.446 "name": "BaseBdev3", 00:09:35.446 "aliases": [ 00:09:35.446 "24e54314-13ea-42a9-9a9d-01fea78a7402" 00:09:35.446 ], 00:09:35.446 "product_name": "Malloc disk", 00:09:35.446 "block_size": 512, 00:09:35.446 "num_blocks": 65536, 00:09:35.446 "uuid": "24e54314-13ea-42a9-9a9d-01fea78a7402", 00:09:35.446 "assigned_rate_limits": { 00:09:35.446 "rw_ios_per_sec": 0, 00:09:35.446 "rw_mbytes_per_sec": 0, 00:09:35.446 "r_mbytes_per_sec": 0, 00:09:35.446 "w_mbytes_per_sec": 0 00:09:35.446 }, 00:09:35.446 "claimed": true, 00:09:35.446 "claim_type": "exclusive_write", 00:09:35.446 "zoned": false, 00:09:35.446 "supported_io_types": { 00:09:35.446 "read": true, 00:09:35.446 "write": true, 00:09:35.446 "unmap": true, 00:09:35.446 "flush": true, 00:09:35.446 "reset": true, 00:09:35.446 "nvme_admin": false, 00:09:35.446 "nvme_io": false, 00:09:35.446 "nvme_io_md": false, 00:09:35.446 "write_zeroes": true, 00:09:35.446 "zcopy": true, 00:09:35.446 "get_zone_info": false, 00:09:35.446 "zone_management": false, 00:09:35.446 "zone_append": false, 00:09:35.446 "compare": false, 00:09:35.446 "compare_and_write": false, 00:09:35.446 "abort": true, 00:09:35.446 "seek_hole": false, 00:09:35.446 "seek_data": false, 00:09:35.446 
"copy": true, 00:09:35.446 "nvme_iov_md": false 00:09:35.446 }, 00:09:35.446 "memory_domains": [ 00:09:35.446 { 00:09:35.446 "dma_device_id": "system", 00:09:35.446 "dma_device_type": 1 00:09:35.446 }, 00:09:35.446 { 00:09:35.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.446 "dma_device_type": 2 00:09:35.446 } 00:09:35.446 ], 00:09:35.446 "driver_specific": {} 00:09:35.446 } 00:09:35.446 ] 00:09:35.446 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.446 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:35.446 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:35.446 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:35.446 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:35.446 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.446 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.446 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.446 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.447 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.447 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.447 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.447 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.447 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.447 15:17:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.447 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.447 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.447 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.447 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.447 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.447 "name": "Existed_Raid", 00:09:35.447 "uuid": "51121b83-4932-4d25-8b1c-8367ac65789d", 00:09:35.447 "strip_size_kb": 0, 00:09:35.447 "state": "online", 00:09:35.447 "raid_level": "raid1", 00:09:35.447 "superblock": false, 00:09:35.447 "num_base_bdevs": 3, 00:09:35.447 "num_base_bdevs_discovered": 3, 00:09:35.447 "num_base_bdevs_operational": 3, 00:09:35.447 "base_bdevs_list": [ 00:09:35.447 { 00:09:35.447 "name": "BaseBdev1", 00:09:35.447 "uuid": "3e3a6978-8dfe-45eb-9ecb-0a90809f4597", 00:09:35.447 "is_configured": true, 00:09:35.447 "data_offset": 0, 00:09:35.447 "data_size": 65536 00:09:35.447 }, 00:09:35.447 { 00:09:35.447 "name": "BaseBdev2", 00:09:35.447 "uuid": "9b75fc77-c607-4ecb-8993-2551426ca048", 00:09:35.447 "is_configured": true, 00:09:35.447 "data_offset": 0, 00:09:35.447 "data_size": 65536 00:09:35.447 }, 00:09:35.447 { 00:09:35.447 "name": "BaseBdev3", 00:09:35.447 "uuid": "24e54314-13ea-42a9-9a9d-01fea78a7402", 00:09:35.447 "is_configured": true, 00:09:35.447 "data_offset": 0, 00:09:35.447 "data_size": 65536 00:09:35.447 } 00:09:35.447 ] 00:09:35.447 }' 00:09:35.447 15:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.447 15:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.706 15:17:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:35.706 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:35.706 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:35.706 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:35.706 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:35.706 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:35.706 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:35.706 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:35.706 15:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.706 15:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.706 [2024-11-20 15:17:22.131912] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.706 15:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.706 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:35.706 "name": "Existed_Raid", 00:09:35.706 "aliases": [ 00:09:35.706 "51121b83-4932-4d25-8b1c-8367ac65789d" 00:09:35.706 ], 00:09:35.706 "product_name": "Raid Volume", 00:09:35.706 "block_size": 512, 00:09:35.706 "num_blocks": 65536, 00:09:35.706 "uuid": "51121b83-4932-4d25-8b1c-8367ac65789d", 00:09:35.706 "assigned_rate_limits": { 00:09:35.706 "rw_ios_per_sec": 0, 00:09:35.706 "rw_mbytes_per_sec": 0, 00:09:35.706 "r_mbytes_per_sec": 0, 00:09:35.706 "w_mbytes_per_sec": 0 00:09:35.706 }, 00:09:35.706 "claimed": false, 00:09:35.706 "zoned": false, 
00:09:35.706 "supported_io_types": { 00:09:35.706 "read": true, 00:09:35.706 "write": true, 00:09:35.706 "unmap": false, 00:09:35.706 "flush": false, 00:09:35.706 "reset": true, 00:09:35.706 "nvme_admin": false, 00:09:35.706 "nvme_io": false, 00:09:35.706 "nvme_io_md": false, 00:09:35.706 "write_zeroes": true, 00:09:35.706 "zcopy": false, 00:09:35.706 "get_zone_info": false, 00:09:35.706 "zone_management": false, 00:09:35.706 "zone_append": false, 00:09:35.706 "compare": false, 00:09:35.706 "compare_and_write": false, 00:09:35.706 "abort": false, 00:09:35.706 "seek_hole": false, 00:09:35.706 "seek_data": false, 00:09:35.706 "copy": false, 00:09:35.706 "nvme_iov_md": false 00:09:35.706 }, 00:09:35.706 "memory_domains": [ 00:09:35.706 { 00:09:35.706 "dma_device_id": "system", 00:09:35.706 "dma_device_type": 1 00:09:35.706 }, 00:09:35.706 { 00:09:35.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.706 "dma_device_type": 2 00:09:35.706 }, 00:09:35.706 { 00:09:35.706 "dma_device_id": "system", 00:09:35.706 "dma_device_type": 1 00:09:35.706 }, 00:09:35.706 { 00:09:35.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.706 "dma_device_type": 2 00:09:35.706 }, 00:09:35.706 { 00:09:35.706 "dma_device_id": "system", 00:09:35.706 "dma_device_type": 1 00:09:35.706 }, 00:09:35.706 { 00:09:35.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.706 "dma_device_type": 2 00:09:35.706 } 00:09:35.706 ], 00:09:35.706 "driver_specific": { 00:09:35.706 "raid": { 00:09:35.706 "uuid": "51121b83-4932-4d25-8b1c-8367ac65789d", 00:09:35.706 "strip_size_kb": 0, 00:09:35.706 "state": "online", 00:09:35.706 "raid_level": "raid1", 00:09:35.706 "superblock": false, 00:09:35.706 "num_base_bdevs": 3, 00:09:35.706 "num_base_bdevs_discovered": 3, 00:09:35.706 "num_base_bdevs_operational": 3, 00:09:35.706 "base_bdevs_list": [ 00:09:35.706 { 00:09:35.706 "name": "BaseBdev1", 00:09:35.706 "uuid": "3e3a6978-8dfe-45eb-9ecb-0a90809f4597", 00:09:35.706 "is_configured": true, 00:09:35.706 
"data_offset": 0, 00:09:35.706 "data_size": 65536 00:09:35.706 }, 00:09:35.706 { 00:09:35.706 "name": "BaseBdev2", 00:09:35.706 "uuid": "9b75fc77-c607-4ecb-8993-2551426ca048", 00:09:35.706 "is_configured": true, 00:09:35.706 "data_offset": 0, 00:09:35.706 "data_size": 65536 00:09:35.706 }, 00:09:35.706 { 00:09:35.706 "name": "BaseBdev3", 00:09:35.706 "uuid": "24e54314-13ea-42a9-9a9d-01fea78a7402", 00:09:35.706 "is_configured": true, 00:09:35.706 "data_offset": 0, 00:09:35.706 "data_size": 65536 00:09:35.706 } 00:09:35.706 ] 00:09:35.706 } 00:09:35.706 } 00:09:35.706 }' 00:09:35.706 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:35.965 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:35.965 BaseBdev2 00:09:35.965 BaseBdev3' 00:09:35.965 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.965 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:35.965 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.965 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:35.965 15:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.965 15:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.965 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.965 15:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.965 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:35.965 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.965 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.965 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:35.965 15:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.965 15:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.965 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.965 15:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.965 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.965 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.965 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.965 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:35.965 15:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.965 15:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.965 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.965 15:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.965 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.965 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:35.965 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:35.965 15:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.965 15:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.965 [2024-11-20 15:17:22.411239] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:36.224 15:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.224 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:36.224 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:36.224 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:36.224 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:36.224 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:36.224 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:36.224 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.224 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.224 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.224 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.224 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:36.224 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.224 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:36.224 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.224 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.224 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.224 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.224 15:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.224 15:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.224 15:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.224 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.224 "name": "Existed_Raid", 00:09:36.224 "uuid": "51121b83-4932-4d25-8b1c-8367ac65789d", 00:09:36.224 "strip_size_kb": 0, 00:09:36.224 "state": "online", 00:09:36.224 "raid_level": "raid1", 00:09:36.224 "superblock": false, 00:09:36.224 "num_base_bdevs": 3, 00:09:36.224 "num_base_bdevs_discovered": 2, 00:09:36.224 "num_base_bdevs_operational": 2, 00:09:36.224 "base_bdevs_list": [ 00:09:36.224 { 00:09:36.224 "name": null, 00:09:36.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.224 "is_configured": false, 00:09:36.224 "data_offset": 0, 00:09:36.224 "data_size": 65536 00:09:36.224 }, 00:09:36.224 { 00:09:36.224 "name": "BaseBdev2", 00:09:36.224 "uuid": "9b75fc77-c607-4ecb-8993-2551426ca048", 00:09:36.224 "is_configured": true, 00:09:36.224 "data_offset": 0, 00:09:36.224 "data_size": 65536 00:09:36.224 }, 00:09:36.224 { 00:09:36.224 "name": "BaseBdev3", 00:09:36.224 "uuid": "24e54314-13ea-42a9-9a9d-01fea78a7402", 00:09:36.224 "is_configured": true, 00:09:36.224 "data_offset": 0, 00:09:36.224 "data_size": 65536 00:09:36.224 } 00:09:36.224 ] 
00:09:36.224 }' 00:09:36.224 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.224 15:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.483 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:36.483 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.483 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.483 15:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.483 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:36.483 15:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.483 15:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.741 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:36.741 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:36.741 15:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:36.741 15:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.741 15:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.741 [2024-11-20 15:17:22.984304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:36.741 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.741 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:36.741 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.741 15:17:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.741 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:36.741 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.741 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.741 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.741 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:36.741 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:36.741 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:36.741 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.741 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.741 [2024-11-20 15:17:23.137111] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:36.741 [2024-11-20 15:17:23.137213] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:37.000 [2024-11-20 15:17:23.234617] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:37.000 [2024-11-20 15:17:23.234868] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:37.000 [2024-11-20 15:17:23.235009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:37.000 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.000 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:37.000 15:17:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:37.000 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.000 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:37.000 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.000 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.000 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.000 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:37.000 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:37.000 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:37.000 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:37.000 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:37.000 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:37.000 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.000 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.001 BaseBdev2 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.001 
15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.001 [ 00:09:37.001 { 00:09:37.001 "name": "BaseBdev2", 00:09:37.001 "aliases": [ 00:09:37.001 "78f4501e-e027-4172-862e-04a2aab0f9f6" 00:09:37.001 ], 00:09:37.001 "product_name": "Malloc disk", 00:09:37.001 "block_size": 512, 00:09:37.001 "num_blocks": 65536, 00:09:37.001 "uuid": "78f4501e-e027-4172-862e-04a2aab0f9f6", 00:09:37.001 "assigned_rate_limits": { 00:09:37.001 "rw_ios_per_sec": 0, 00:09:37.001 "rw_mbytes_per_sec": 0, 00:09:37.001 "r_mbytes_per_sec": 0, 00:09:37.001 "w_mbytes_per_sec": 0 00:09:37.001 }, 00:09:37.001 "claimed": false, 00:09:37.001 "zoned": false, 00:09:37.001 "supported_io_types": { 00:09:37.001 "read": true, 00:09:37.001 "write": true, 00:09:37.001 "unmap": true, 00:09:37.001 "flush": true, 00:09:37.001 "reset": true, 00:09:37.001 "nvme_admin": false, 00:09:37.001 "nvme_io": false, 00:09:37.001 "nvme_io_md": false, 00:09:37.001 "write_zeroes": true, 
00:09:37.001 "zcopy": true, 00:09:37.001 "get_zone_info": false, 00:09:37.001 "zone_management": false, 00:09:37.001 "zone_append": false, 00:09:37.001 "compare": false, 00:09:37.001 "compare_and_write": false, 00:09:37.001 "abort": true, 00:09:37.001 "seek_hole": false, 00:09:37.001 "seek_data": false, 00:09:37.001 "copy": true, 00:09:37.001 "nvme_iov_md": false 00:09:37.001 }, 00:09:37.001 "memory_domains": [ 00:09:37.001 { 00:09:37.001 "dma_device_id": "system", 00:09:37.001 "dma_device_type": 1 00:09:37.001 }, 00:09:37.001 { 00:09:37.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.001 "dma_device_type": 2 00:09:37.001 } 00:09:37.001 ], 00:09:37.001 "driver_specific": {} 00:09:37.001 } 00:09:37.001 ] 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.001 BaseBdev3 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.001 15:17:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.001 [ 00:09:37.001 { 00:09:37.001 "name": "BaseBdev3", 00:09:37.001 "aliases": [ 00:09:37.001 "a14227ab-0e7b-4260-a0c6-a79f1cec4c2b" 00:09:37.001 ], 00:09:37.001 "product_name": "Malloc disk", 00:09:37.001 "block_size": 512, 00:09:37.001 "num_blocks": 65536, 00:09:37.001 "uuid": "a14227ab-0e7b-4260-a0c6-a79f1cec4c2b", 00:09:37.001 "assigned_rate_limits": { 00:09:37.001 "rw_ios_per_sec": 0, 00:09:37.001 "rw_mbytes_per_sec": 0, 00:09:37.001 "r_mbytes_per_sec": 0, 00:09:37.001 "w_mbytes_per_sec": 0 00:09:37.001 }, 00:09:37.001 "claimed": false, 00:09:37.001 "zoned": false, 00:09:37.001 "supported_io_types": { 00:09:37.001 "read": true, 00:09:37.001 "write": true, 00:09:37.001 "unmap": true, 00:09:37.001 "flush": true, 00:09:37.001 "reset": true, 00:09:37.001 "nvme_admin": false, 00:09:37.001 "nvme_io": false, 00:09:37.001 "nvme_io_md": false, 00:09:37.001 "write_zeroes": true, 
00:09:37.001 "zcopy": true, 00:09:37.001 "get_zone_info": false, 00:09:37.001 "zone_management": false, 00:09:37.001 "zone_append": false, 00:09:37.001 "compare": false, 00:09:37.001 "compare_and_write": false, 00:09:37.001 "abort": true, 00:09:37.001 "seek_hole": false, 00:09:37.001 "seek_data": false, 00:09:37.001 "copy": true, 00:09:37.001 "nvme_iov_md": false 00:09:37.001 }, 00:09:37.001 "memory_domains": [ 00:09:37.001 { 00:09:37.001 "dma_device_id": "system", 00:09:37.001 "dma_device_type": 1 00:09:37.001 }, 00:09:37.001 { 00:09:37.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.001 "dma_device_type": 2 00:09:37.001 } 00:09:37.001 ], 00:09:37.001 "driver_specific": {} 00:09:37.001 } 00:09:37.001 ] 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.001 [2024-11-20 15:17:23.462446] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:37.001 [2024-11-20 15:17:23.462618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:37.001 [2024-11-20 15:17:23.462651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:37.001 [2024-11-20 15:17:23.464706] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.001 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.002 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.002 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.260 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.260 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:37.260 "name": "Existed_Raid", 00:09:37.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.260 "strip_size_kb": 0, 00:09:37.260 "state": "configuring", 00:09:37.260 "raid_level": "raid1", 00:09:37.260 "superblock": false, 00:09:37.260 "num_base_bdevs": 3, 00:09:37.260 "num_base_bdevs_discovered": 2, 00:09:37.260 "num_base_bdevs_operational": 3, 00:09:37.260 "base_bdevs_list": [ 00:09:37.260 { 00:09:37.260 "name": "BaseBdev1", 00:09:37.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.260 "is_configured": false, 00:09:37.260 "data_offset": 0, 00:09:37.260 "data_size": 0 00:09:37.260 }, 00:09:37.260 { 00:09:37.260 "name": "BaseBdev2", 00:09:37.260 "uuid": "78f4501e-e027-4172-862e-04a2aab0f9f6", 00:09:37.260 "is_configured": true, 00:09:37.260 "data_offset": 0, 00:09:37.260 "data_size": 65536 00:09:37.260 }, 00:09:37.260 { 00:09:37.260 "name": "BaseBdev3", 00:09:37.260 "uuid": "a14227ab-0e7b-4260-a0c6-a79f1cec4c2b", 00:09:37.260 "is_configured": true, 00:09:37.260 "data_offset": 0, 00:09:37.260 "data_size": 65536 00:09:37.260 } 00:09:37.260 ] 00:09:37.260 }' 00:09:37.260 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.260 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.519 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:37.519 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.519 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.519 [2024-11-20 15:17:23.917826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:37.519 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.519 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:37.519 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.519 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.519 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.519 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.519 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.519 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.519 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.519 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.519 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.519 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.519 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.519 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.519 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.519 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.519 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.519 "name": "Existed_Raid", 00:09:37.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.519 "strip_size_kb": 0, 00:09:37.519 "state": "configuring", 00:09:37.519 "raid_level": "raid1", 00:09:37.519 "superblock": false, 00:09:37.519 "num_base_bdevs": 3, 
00:09:37.519 "num_base_bdevs_discovered": 1, 00:09:37.519 "num_base_bdevs_operational": 3, 00:09:37.519 "base_bdevs_list": [ 00:09:37.519 { 00:09:37.519 "name": "BaseBdev1", 00:09:37.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.519 "is_configured": false, 00:09:37.519 "data_offset": 0, 00:09:37.519 "data_size": 0 00:09:37.519 }, 00:09:37.519 { 00:09:37.519 "name": null, 00:09:37.519 "uuid": "78f4501e-e027-4172-862e-04a2aab0f9f6", 00:09:37.519 "is_configured": false, 00:09:37.519 "data_offset": 0, 00:09:37.519 "data_size": 65536 00:09:37.519 }, 00:09:37.519 { 00:09:37.519 "name": "BaseBdev3", 00:09:37.520 "uuid": "a14227ab-0e7b-4260-a0c6-a79f1cec4c2b", 00:09:37.520 "is_configured": true, 00:09:37.520 "data_offset": 0, 00:09:37.520 "data_size": 65536 00:09:37.520 } 00:09:37.520 ] 00:09:37.520 }' 00:09:37.520 15:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.520 15:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.089 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.089 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:38.089 15:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.089 15:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.089 15:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.089 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:38.089 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:38.089 15:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.089 15:17:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.089 [2024-11-20 15:17:24.426754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:38.089 BaseBdev1 00:09:38.089 15:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.089 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:38.089 15:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:38.089 15:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:38.089 15:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:38.089 15:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:38.089 15:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:38.089 15:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:38.089 15:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.089 15:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.089 15:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.089 15:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:38.089 15:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.089 15:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.089 [ 00:09:38.089 { 00:09:38.089 "name": "BaseBdev1", 00:09:38.089 "aliases": [ 00:09:38.089 "04b00896-3135-4f9c-b56b-5b1a5f49527f" 00:09:38.089 ], 00:09:38.089 "product_name": "Malloc disk", 
00:09:38.089 "block_size": 512, 00:09:38.089 "num_blocks": 65536, 00:09:38.089 "uuid": "04b00896-3135-4f9c-b56b-5b1a5f49527f", 00:09:38.089 "assigned_rate_limits": { 00:09:38.089 "rw_ios_per_sec": 0, 00:09:38.089 "rw_mbytes_per_sec": 0, 00:09:38.089 "r_mbytes_per_sec": 0, 00:09:38.089 "w_mbytes_per_sec": 0 00:09:38.089 }, 00:09:38.089 "claimed": true, 00:09:38.089 "claim_type": "exclusive_write", 00:09:38.089 "zoned": false, 00:09:38.089 "supported_io_types": { 00:09:38.089 "read": true, 00:09:38.089 "write": true, 00:09:38.089 "unmap": true, 00:09:38.089 "flush": true, 00:09:38.089 "reset": true, 00:09:38.089 "nvme_admin": false, 00:09:38.089 "nvme_io": false, 00:09:38.089 "nvme_io_md": false, 00:09:38.089 "write_zeroes": true, 00:09:38.089 "zcopy": true, 00:09:38.089 "get_zone_info": false, 00:09:38.089 "zone_management": false, 00:09:38.089 "zone_append": false, 00:09:38.089 "compare": false, 00:09:38.089 "compare_and_write": false, 00:09:38.089 "abort": true, 00:09:38.089 "seek_hole": false, 00:09:38.089 "seek_data": false, 00:09:38.089 "copy": true, 00:09:38.089 "nvme_iov_md": false 00:09:38.089 }, 00:09:38.089 "memory_domains": [ 00:09:38.089 { 00:09:38.089 "dma_device_id": "system", 00:09:38.089 "dma_device_type": 1 00:09:38.089 }, 00:09:38.089 { 00:09:38.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.089 "dma_device_type": 2 00:09:38.089 } 00:09:38.089 ], 00:09:38.089 "driver_specific": {} 00:09:38.089 } 00:09:38.089 ] 00:09:38.089 15:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.089 15:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:38.090 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:38.090 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.090 15:17:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.090 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.090 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.090 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.090 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.090 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.090 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.090 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.090 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.090 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.090 15:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.090 15:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.090 15:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.090 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.090 "name": "Existed_Raid", 00:09:38.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.090 "strip_size_kb": 0, 00:09:38.090 "state": "configuring", 00:09:38.090 "raid_level": "raid1", 00:09:38.090 "superblock": false, 00:09:38.090 "num_base_bdevs": 3, 00:09:38.090 "num_base_bdevs_discovered": 2, 00:09:38.090 "num_base_bdevs_operational": 3, 00:09:38.090 "base_bdevs_list": [ 00:09:38.090 { 00:09:38.090 "name": "BaseBdev1", 00:09:38.090 "uuid": 
"04b00896-3135-4f9c-b56b-5b1a5f49527f", 00:09:38.090 "is_configured": true, 00:09:38.090 "data_offset": 0, 00:09:38.090 "data_size": 65536 00:09:38.090 }, 00:09:38.090 { 00:09:38.090 "name": null, 00:09:38.090 "uuid": "78f4501e-e027-4172-862e-04a2aab0f9f6", 00:09:38.090 "is_configured": false, 00:09:38.090 "data_offset": 0, 00:09:38.090 "data_size": 65536 00:09:38.090 }, 00:09:38.090 { 00:09:38.090 "name": "BaseBdev3", 00:09:38.090 "uuid": "a14227ab-0e7b-4260-a0c6-a79f1cec4c2b", 00:09:38.090 "is_configured": true, 00:09:38.090 "data_offset": 0, 00:09:38.090 "data_size": 65536 00:09:38.090 } 00:09:38.090 ] 00:09:38.090 }' 00:09:38.090 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.090 15:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.658 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.658 15:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.658 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:38.658 15:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.658 15:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.658 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:38.658 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:38.658 15:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.658 15:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.658 [2024-11-20 15:17:24.918374] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:38.658 15:17:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.658 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:38.658 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.658 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.658 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.658 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.658 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.658 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.658 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.658 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.658 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.658 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.658 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.658 15:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.658 15:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.658 15:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.658 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.658 "name": "Existed_Raid", 00:09:38.658 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:38.658 "strip_size_kb": 0, 00:09:38.658 "state": "configuring", 00:09:38.658 "raid_level": "raid1", 00:09:38.658 "superblock": false, 00:09:38.658 "num_base_bdevs": 3, 00:09:38.658 "num_base_bdevs_discovered": 1, 00:09:38.658 "num_base_bdevs_operational": 3, 00:09:38.658 "base_bdevs_list": [ 00:09:38.658 { 00:09:38.658 "name": "BaseBdev1", 00:09:38.658 "uuid": "04b00896-3135-4f9c-b56b-5b1a5f49527f", 00:09:38.658 "is_configured": true, 00:09:38.658 "data_offset": 0, 00:09:38.658 "data_size": 65536 00:09:38.658 }, 00:09:38.658 { 00:09:38.658 "name": null, 00:09:38.658 "uuid": "78f4501e-e027-4172-862e-04a2aab0f9f6", 00:09:38.658 "is_configured": false, 00:09:38.658 "data_offset": 0, 00:09:38.658 "data_size": 65536 00:09:38.658 }, 00:09:38.658 { 00:09:38.658 "name": null, 00:09:38.658 "uuid": "a14227ab-0e7b-4260-a0c6-a79f1cec4c2b", 00:09:38.658 "is_configured": false, 00:09:38.658 "data_offset": 0, 00:09:38.658 "data_size": 65536 00:09:38.658 } 00:09:38.658 ] 00:09:38.658 }' 00:09:38.658 15:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.658 15:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.930 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.930 15:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.930 15:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.930 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:38.930 15:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.930 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:38.930 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:38.930 15:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.930 15:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.930 [2024-11-20 15:17:25.393831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:39.197 15:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.197 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:39.197 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.197 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.197 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.198 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.198 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.198 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.198 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.198 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.198 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.198 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.198 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.198 15:17:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.198 15:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.198 15:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.198 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.198 "name": "Existed_Raid", 00:09:39.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.198 "strip_size_kb": 0, 00:09:39.198 "state": "configuring", 00:09:39.198 "raid_level": "raid1", 00:09:39.198 "superblock": false, 00:09:39.198 "num_base_bdevs": 3, 00:09:39.198 "num_base_bdevs_discovered": 2, 00:09:39.198 "num_base_bdevs_operational": 3, 00:09:39.198 "base_bdevs_list": [ 00:09:39.198 { 00:09:39.198 "name": "BaseBdev1", 00:09:39.198 "uuid": "04b00896-3135-4f9c-b56b-5b1a5f49527f", 00:09:39.198 "is_configured": true, 00:09:39.198 "data_offset": 0, 00:09:39.198 "data_size": 65536 00:09:39.198 }, 00:09:39.198 { 00:09:39.198 "name": null, 00:09:39.198 "uuid": "78f4501e-e027-4172-862e-04a2aab0f9f6", 00:09:39.198 "is_configured": false, 00:09:39.198 "data_offset": 0, 00:09:39.198 "data_size": 65536 00:09:39.198 }, 00:09:39.198 { 00:09:39.198 "name": "BaseBdev3", 00:09:39.198 "uuid": "a14227ab-0e7b-4260-a0c6-a79f1cec4c2b", 00:09:39.198 "is_configured": true, 00:09:39.198 "data_offset": 0, 00:09:39.198 "data_size": 65536 00:09:39.198 } 00:09:39.198 ] 00:09:39.198 }' 00:09:39.198 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.198 15:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.457 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.457 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:39.457 15:17:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.457 15:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.457 15:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.457 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:39.457 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:39.457 15:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.457 15:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.457 [2024-11-20 15:17:25.861828] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:39.716 15:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.716 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:39.716 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.716 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.716 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.716 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.716 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.716 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.716 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.716 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.716 15:17:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.716 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.716 15:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.716 15:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.716 15:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.716 15:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.716 15:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.716 "name": "Existed_Raid", 00:09:39.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.716 "strip_size_kb": 0, 00:09:39.716 "state": "configuring", 00:09:39.716 "raid_level": "raid1", 00:09:39.716 "superblock": false, 00:09:39.716 "num_base_bdevs": 3, 00:09:39.716 "num_base_bdevs_discovered": 1, 00:09:39.716 "num_base_bdevs_operational": 3, 00:09:39.716 "base_bdevs_list": [ 00:09:39.716 { 00:09:39.716 "name": null, 00:09:39.716 "uuid": "04b00896-3135-4f9c-b56b-5b1a5f49527f", 00:09:39.716 "is_configured": false, 00:09:39.716 "data_offset": 0, 00:09:39.716 "data_size": 65536 00:09:39.716 }, 00:09:39.716 { 00:09:39.716 "name": null, 00:09:39.716 "uuid": "78f4501e-e027-4172-862e-04a2aab0f9f6", 00:09:39.716 "is_configured": false, 00:09:39.716 "data_offset": 0, 00:09:39.716 "data_size": 65536 00:09:39.716 }, 00:09:39.716 { 00:09:39.716 "name": "BaseBdev3", 00:09:39.716 "uuid": "a14227ab-0e7b-4260-a0c6-a79f1cec4c2b", 00:09:39.716 "is_configured": true, 00:09:39.716 "data_offset": 0, 00:09:39.716 "data_size": 65536 00:09:39.716 } 00:09:39.716 ] 00:09:39.716 }' 00:09:39.716 15:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.716 15:17:26 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:39.976 15:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.976 15:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.976 15:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.976 15:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:39.976 15:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.234 15:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:40.234 15:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:40.234 15:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.234 15:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.234 [2024-11-20 15:17:26.465813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:40.234 15:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.234 15:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:40.234 15:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.234 15:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.234 15:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.234 15:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.234 15:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:40.234 15:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.234 15:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.234 15:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.234 15:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.234 15:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.234 15:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.234 15:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.234 15:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.234 15:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.234 15:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.234 "name": "Existed_Raid", 00:09:40.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.234 "strip_size_kb": 0, 00:09:40.234 "state": "configuring", 00:09:40.234 "raid_level": "raid1", 00:09:40.234 "superblock": false, 00:09:40.234 "num_base_bdevs": 3, 00:09:40.234 "num_base_bdevs_discovered": 2, 00:09:40.234 "num_base_bdevs_operational": 3, 00:09:40.234 "base_bdevs_list": [ 00:09:40.234 { 00:09:40.234 "name": null, 00:09:40.234 "uuid": "04b00896-3135-4f9c-b56b-5b1a5f49527f", 00:09:40.234 "is_configured": false, 00:09:40.234 "data_offset": 0, 00:09:40.234 "data_size": 65536 00:09:40.234 }, 00:09:40.234 { 00:09:40.234 "name": "BaseBdev2", 00:09:40.234 "uuid": "78f4501e-e027-4172-862e-04a2aab0f9f6", 00:09:40.234 "is_configured": true, 00:09:40.234 "data_offset": 0, 00:09:40.234 "data_size": 65536 00:09:40.234 }, 00:09:40.234 { 
00:09:40.234 "name": "BaseBdev3", 00:09:40.234 "uuid": "a14227ab-0e7b-4260-a0c6-a79f1cec4c2b", 00:09:40.234 "is_configured": true, 00:09:40.234 "data_offset": 0, 00:09:40.234 "data_size": 65536 00:09:40.234 } 00:09:40.234 ] 00:09:40.234 }' 00:09:40.234 15:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.234 15:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.494 15:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:40.494 15:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.494 15:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.494 15:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.494 15:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.494 15:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:40.494 15:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:40.494 15:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.494 15:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.494 15:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.494 15:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.494 15:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 04b00896-3135-4f9c-b56b-5b1a5f49527f 00:09:40.494 15:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.494 15:17:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.494 [2024-11-20 15:17:26.966792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:40.494 [2024-11-20 15:17:26.966842] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:40.494 [2024-11-20 15:17:26.966851] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:40.494 [2024-11-20 15:17:26.967127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:40.494 [2024-11-20 15:17:26.967266] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:40.494 [2024-11-20 15:17:26.967279] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:40.494 [2024-11-20 15:17:26.967530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.494 NewBaseBdev 00:09:40.494 15:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.494 15:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:40.494 15:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:40.494 15:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:40.494 15:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:40.494 15:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:40.494 15:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:40.494 15:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:40.494 15:17:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.494 15:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.754 15:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.754 15:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:40.754 15:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.754 15:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.754 [ 00:09:40.754 { 00:09:40.754 "name": "NewBaseBdev", 00:09:40.754 "aliases": [ 00:09:40.754 "04b00896-3135-4f9c-b56b-5b1a5f49527f" 00:09:40.754 ], 00:09:40.754 "product_name": "Malloc disk", 00:09:40.754 "block_size": 512, 00:09:40.754 "num_blocks": 65536, 00:09:40.754 "uuid": "04b00896-3135-4f9c-b56b-5b1a5f49527f", 00:09:40.754 "assigned_rate_limits": { 00:09:40.754 "rw_ios_per_sec": 0, 00:09:40.754 "rw_mbytes_per_sec": 0, 00:09:40.754 "r_mbytes_per_sec": 0, 00:09:40.754 "w_mbytes_per_sec": 0 00:09:40.754 }, 00:09:40.754 "claimed": true, 00:09:40.754 "claim_type": "exclusive_write", 00:09:40.754 "zoned": false, 00:09:40.754 "supported_io_types": { 00:09:40.754 "read": true, 00:09:40.754 "write": true, 00:09:40.754 "unmap": true, 00:09:40.754 "flush": true, 00:09:40.754 "reset": true, 00:09:40.754 "nvme_admin": false, 00:09:40.754 "nvme_io": false, 00:09:40.754 "nvme_io_md": false, 00:09:40.754 "write_zeroes": true, 00:09:40.754 "zcopy": true, 00:09:40.754 "get_zone_info": false, 00:09:40.754 "zone_management": false, 00:09:40.754 "zone_append": false, 00:09:40.754 "compare": false, 00:09:40.754 "compare_and_write": false, 00:09:40.754 "abort": true, 00:09:40.754 "seek_hole": false, 00:09:40.754 "seek_data": false, 00:09:40.754 "copy": true, 00:09:40.754 "nvme_iov_md": false 00:09:40.754 }, 00:09:40.754 "memory_domains": [ 00:09:40.754 { 00:09:40.754 
"dma_device_id": "system", 00:09:40.754 "dma_device_type": 1 00:09:40.754 }, 00:09:40.754 { 00:09:40.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.754 "dma_device_type": 2 00:09:40.754 } 00:09:40.754 ], 00:09:40.754 "driver_specific": {} 00:09:40.754 } 00:09:40.754 ] 00:09:40.754 15:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.754 15:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:40.754 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:40.754 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.754 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:40.754 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.754 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.754 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.754 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.754 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.754 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.754 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.754 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.754 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.754 15:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:40.754 15:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.754 15:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.754 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.754 "name": "Existed_Raid", 00:09:40.754 "uuid": "15a14ac9-2c53-4454-acb5-375eba6cbdbb", 00:09:40.754 "strip_size_kb": 0, 00:09:40.754 "state": "online", 00:09:40.754 "raid_level": "raid1", 00:09:40.754 "superblock": false, 00:09:40.754 "num_base_bdevs": 3, 00:09:40.754 "num_base_bdevs_discovered": 3, 00:09:40.754 "num_base_bdevs_operational": 3, 00:09:40.754 "base_bdevs_list": [ 00:09:40.754 { 00:09:40.754 "name": "NewBaseBdev", 00:09:40.754 "uuid": "04b00896-3135-4f9c-b56b-5b1a5f49527f", 00:09:40.754 "is_configured": true, 00:09:40.754 "data_offset": 0, 00:09:40.754 "data_size": 65536 00:09:40.754 }, 00:09:40.754 { 00:09:40.754 "name": "BaseBdev2", 00:09:40.754 "uuid": "78f4501e-e027-4172-862e-04a2aab0f9f6", 00:09:40.754 "is_configured": true, 00:09:40.754 "data_offset": 0, 00:09:40.754 "data_size": 65536 00:09:40.754 }, 00:09:40.754 { 00:09:40.754 "name": "BaseBdev3", 00:09:40.754 "uuid": "a14227ab-0e7b-4260-a0c6-a79f1cec4c2b", 00:09:40.754 "is_configured": true, 00:09:40.754 "data_offset": 0, 00:09:40.754 "data_size": 65536 00:09:40.754 } 00:09:40.754 ] 00:09:40.754 }' 00:09:40.754 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.754 15:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.013 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:41.013 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:41.013 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:41.013 15:17:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:41.013 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:41.013 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:41.013 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:41.013 15:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.013 15:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.013 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:41.013 [2024-11-20 15:17:27.394587] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.013 15:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.013 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:41.013 "name": "Existed_Raid", 00:09:41.013 "aliases": [ 00:09:41.013 "15a14ac9-2c53-4454-acb5-375eba6cbdbb" 00:09:41.013 ], 00:09:41.013 "product_name": "Raid Volume", 00:09:41.013 "block_size": 512, 00:09:41.013 "num_blocks": 65536, 00:09:41.013 "uuid": "15a14ac9-2c53-4454-acb5-375eba6cbdbb", 00:09:41.013 "assigned_rate_limits": { 00:09:41.013 "rw_ios_per_sec": 0, 00:09:41.013 "rw_mbytes_per_sec": 0, 00:09:41.013 "r_mbytes_per_sec": 0, 00:09:41.013 "w_mbytes_per_sec": 0 00:09:41.013 }, 00:09:41.013 "claimed": false, 00:09:41.013 "zoned": false, 00:09:41.013 "supported_io_types": { 00:09:41.013 "read": true, 00:09:41.013 "write": true, 00:09:41.013 "unmap": false, 00:09:41.013 "flush": false, 00:09:41.013 "reset": true, 00:09:41.013 "nvme_admin": false, 00:09:41.013 "nvme_io": false, 00:09:41.013 "nvme_io_md": false, 00:09:41.013 "write_zeroes": true, 00:09:41.013 "zcopy": false, 00:09:41.013 
"get_zone_info": false, 00:09:41.013 "zone_management": false, 00:09:41.013 "zone_append": false, 00:09:41.013 "compare": false, 00:09:41.013 "compare_and_write": false, 00:09:41.013 "abort": false, 00:09:41.013 "seek_hole": false, 00:09:41.013 "seek_data": false, 00:09:41.013 "copy": false, 00:09:41.013 "nvme_iov_md": false 00:09:41.013 }, 00:09:41.013 "memory_domains": [ 00:09:41.013 { 00:09:41.013 "dma_device_id": "system", 00:09:41.013 "dma_device_type": 1 00:09:41.013 }, 00:09:41.013 { 00:09:41.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.013 "dma_device_type": 2 00:09:41.013 }, 00:09:41.013 { 00:09:41.013 "dma_device_id": "system", 00:09:41.013 "dma_device_type": 1 00:09:41.013 }, 00:09:41.013 { 00:09:41.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.013 "dma_device_type": 2 00:09:41.013 }, 00:09:41.013 { 00:09:41.013 "dma_device_id": "system", 00:09:41.013 "dma_device_type": 1 00:09:41.013 }, 00:09:41.013 { 00:09:41.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.013 "dma_device_type": 2 00:09:41.013 } 00:09:41.013 ], 00:09:41.013 "driver_specific": { 00:09:41.013 "raid": { 00:09:41.013 "uuid": "15a14ac9-2c53-4454-acb5-375eba6cbdbb", 00:09:41.013 "strip_size_kb": 0, 00:09:41.013 "state": "online", 00:09:41.013 "raid_level": "raid1", 00:09:41.013 "superblock": false, 00:09:41.013 "num_base_bdevs": 3, 00:09:41.013 "num_base_bdevs_discovered": 3, 00:09:41.013 "num_base_bdevs_operational": 3, 00:09:41.013 "base_bdevs_list": [ 00:09:41.013 { 00:09:41.013 "name": "NewBaseBdev", 00:09:41.013 "uuid": "04b00896-3135-4f9c-b56b-5b1a5f49527f", 00:09:41.013 "is_configured": true, 00:09:41.013 "data_offset": 0, 00:09:41.013 "data_size": 65536 00:09:41.013 }, 00:09:41.013 { 00:09:41.013 "name": "BaseBdev2", 00:09:41.013 "uuid": "78f4501e-e027-4172-862e-04a2aab0f9f6", 00:09:41.013 "is_configured": true, 00:09:41.013 "data_offset": 0, 00:09:41.013 "data_size": 65536 00:09:41.013 }, 00:09:41.013 { 00:09:41.013 "name": "BaseBdev3", 00:09:41.013 "uuid": 
"a14227ab-0e7b-4260-a0c6-a79f1cec4c2b", 00:09:41.013 "is_configured": true, 00:09:41.013 "data_offset": 0, 00:09:41.013 "data_size": 65536 00:09:41.013 } 00:09:41.013 ] 00:09:41.013 } 00:09:41.013 } 00:09:41.013 }' 00:09:41.013 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:41.013 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:41.013 BaseBdev2 00:09:41.013 BaseBdev3' 00:09:41.013 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.272 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:41.272 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.272 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:41.272 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.272 15:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.272 15:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.272 15:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.272 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.273 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.273 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.273 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:41.273 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.273 15:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.273 15:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.273 15:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.273 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.273 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.273 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.273 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:41.273 15:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.273 15:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.273 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.273 15:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.273 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.273 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.273 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:41.273 15:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.273 15:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.273 
[2024-11-20 15:17:27.649941] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:41.273 [2024-11-20 15:17:27.650082] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:41.273 [2024-11-20 15:17:27.650184] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.273 [2024-11-20 15:17:27.650463] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:41.273 [2024-11-20 15:17:27.650476] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:41.273 15:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.273 15:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67258 00:09:41.273 15:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67258 ']' 00:09:41.273 15:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67258 00:09:41.273 15:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:41.273 15:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.273 15:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67258 00:09:41.273 15:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.273 15:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.273 15:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67258' 00:09:41.273 killing process with pid 67258 00:09:41.273 15:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67258 00:09:41.273 [2024-11-20 
15:17:27.704031] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:41.273 15:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67258 00:09:41.532 [2024-11-20 15:17:28.009546] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:42.908 00:09:42.908 real 0m10.419s 00:09:42.908 user 0m16.495s 00:09:42.908 sys 0m2.091s 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.908 ************************************ 00:09:42.908 END TEST raid_state_function_test 00:09:42.908 ************************************ 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.908 15:17:29 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:42.908 15:17:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:42.908 15:17:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.908 15:17:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:42.908 ************************************ 00:09:42.908 START TEST raid_state_function_test_sb 00:09:42.908 ************************************ 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:42.908 15:17:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:42.908 
15:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:42.908 Process raid pid: 67879 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67879 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67879' 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67879 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 67879 ']' 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.908 15:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.908 [2024-11-20 15:17:29.337779] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:09:42.908 [2024-11-20 15:17:29.337910] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.166 [2024-11-20 15:17:29.525936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.166 [2024-11-20 15:17:29.644778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.424 [2024-11-20 15:17:29.850181] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.424 [2024-11-20 15:17:29.850345] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.991 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.991 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:43.991 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:43.991 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.991 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.991 [2024-11-20 15:17:30.183376] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:43.991 [2024-11-20 15:17:30.183436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:43.991 [2024-11-20 15:17:30.183453] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:43.991 [2024-11-20 15:17:30.183467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:43.991 [2024-11-20 15:17:30.183475] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:43.991 [2024-11-20 15:17:30.183487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:43.991 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.991 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:43.991 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.991 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.991 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.991 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.991 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.991 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.991 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.991 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.991 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.991 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.991 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.991 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.991 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.991 15:17:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.991 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.991 "name": "Existed_Raid", 00:09:43.991 "uuid": "2fc5b1eb-d22d-4d4d-9cb2-0dc2798d45e1", 00:09:43.991 "strip_size_kb": 0, 00:09:43.991 "state": "configuring", 00:09:43.991 "raid_level": "raid1", 00:09:43.991 "superblock": true, 00:09:43.991 "num_base_bdevs": 3, 00:09:43.991 "num_base_bdevs_discovered": 0, 00:09:43.991 "num_base_bdevs_operational": 3, 00:09:43.991 "base_bdevs_list": [ 00:09:43.991 { 00:09:43.991 "name": "BaseBdev1", 00:09:43.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.991 "is_configured": false, 00:09:43.991 "data_offset": 0, 00:09:43.991 "data_size": 0 00:09:43.991 }, 00:09:43.991 { 00:09:43.991 "name": "BaseBdev2", 00:09:43.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.991 "is_configured": false, 00:09:43.991 "data_offset": 0, 00:09:43.991 "data_size": 0 00:09:43.991 }, 00:09:43.991 { 00:09:43.991 "name": "BaseBdev3", 00:09:43.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.991 "is_configured": false, 00:09:43.991 "data_offset": 0, 00:09:43.991 "data_size": 0 00:09:43.991 } 00:09:43.991 ] 00:09:43.991 }' 00:09:43.991 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.991 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.252 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:44.252 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.252 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.252 [2024-11-20 15:17:30.611166] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:44.252 [2024-11-20 15:17:30.611205] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:44.252 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.252 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:44.252 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.252 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.252 [2024-11-20 15:17:30.623140] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:44.252 [2024-11-20 15:17:30.623322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:44.252 [2024-11-20 15:17:30.623407] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:44.252 [2024-11-20 15:17:30.623452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:44.252 [2024-11-20 15:17:30.623692] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:44.252 [2024-11-20 15:17:30.623744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:44.252 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.252 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:44.252 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.252 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.252 [2024-11-20 15:17:30.671468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:44.252 BaseBdev1 
00:09:44.252 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.252 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:44.252 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:44.252 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:44.252 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:44.252 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:44.252 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:44.252 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:44.252 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.252 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.252 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.252 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:44.252 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.252 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.252 [ 00:09:44.252 { 00:09:44.252 "name": "BaseBdev1", 00:09:44.252 "aliases": [ 00:09:44.252 "9f32d5b5-688c-4007-b191-1b798321fa5c" 00:09:44.252 ], 00:09:44.252 "product_name": "Malloc disk", 00:09:44.252 "block_size": 512, 00:09:44.252 "num_blocks": 65536, 00:09:44.252 "uuid": "9f32d5b5-688c-4007-b191-1b798321fa5c", 00:09:44.252 "assigned_rate_limits": { 00:09:44.252 
"rw_ios_per_sec": 0, 00:09:44.252 "rw_mbytes_per_sec": 0, 00:09:44.252 "r_mbytes_per_sec": 0, 00:09:44.252 "w_mbytes_per_sec": 0 00:09:44.252 }, 00:09:44.252 "claimed": true, 00:09:44.252 "claim_type": "exclusive_write", 00:09:44.252 "zoned": false, 00:09:44.252 "supported_io_types": { 00:09:44.252 "read": true, 00:09:44.252 "write": true, 00:09:44.252 "unmap": true, 00:09:44.252 "flush": true, 00:09:44.252 "reset": true, 00:09:44.252 "nvme_admin": false, 00:09:44.252 "nvme_io": false, 00:09:44.252 "nvme_io_md": false, 00:09:44.252 "write_zeroes": true, 00:09:44.252 "zcopy": true, 00:09:44.252 "get_zone_info": false, 00:09:44.252 "zone_management": false, 00:09:44.252 "zone_append": false, 00:09:44.252 "compare": false, 00:09:44.252 "compare_and_write": false, 00:09:44.252 "abort": true, 00:09:44.252 "seek_hole": false, 00:09:44.252 "seek_data": false, 00:09:44.252 "copy": true, 00:09:44.252 "nvme_iov_md": false 00:09:44.252 }, 00:09:44.252 "memory_domains": [ 00:09:44.252 { 00:09:44.252 "dma_device_id": "system", 00:09:44.252 "dma_device_type": 1 00:09:44.252 }, 00:09:44.252 { 00:09:44.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.253 "dma_device_type": 2 00:09:44.253 } 00:09:44.253 ], 00:09:44.253 "driver_specific": {} 00:09:44.253 } 00:09:44.253 ] 00:09:44.253 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.253 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:44.253 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:44.253 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.253 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.253 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:44.253 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:44.253 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.253 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.253 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.253 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.253 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.253 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.253 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.253 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.253 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.512 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.512 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.512 "name": "Existed_Raid", 00:09:44.512 "uuid": "c0b9edf8-078d-4161-add9-88ed0e482da1", 00:09:44.512 "strip_size_kb": 0, 00:09:44.512 "state": "configuring", 00:09:44.512 "raid_level": "raid1", 00:09:44.512 "superblock": true, 00:09:44.512 "num_base_bdevs": 3, 00:09:44.512 "num_base_bdevs_discovered": 1, 00:09:44.512 "num_base_bdevs_operational": 3, 00:09:44.512 "base_bdevs_list": [ 00:09:44.512 { 00:09:44.512 "name": "BaseBdev1", 00:09:44.512 "uuid": "9f32d5b5-688c-4007-b191-1b798321fa5c", 00:09:44.512 "is_configured": true, 00:09:44.512 "data_offset": 2048, 00:09:44.512 "data_size": 63488 
00:09:44.512 }, 00:09:44.512 { 00:09:44.512 "name": "BaseBdev2", 00:09:44.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.512 "is_configured": false, 00:09:44.512 "data_offset": 0, 00:09:44.512 "data_size": 0 00:09:44.512 }, 00:09:44.512 { 00:09:44.512 "name": "BaseBdev3", 00:09:44.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.512 "is_configured": false, 00:09:44.512 "data_offset": 0, 00:09:44.512 "data_size": 0 00:09:44.512 } 00:09:44.512 ] 00:09:44.512 }' 00:09:44.512 15:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.512 15:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.771 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:44.771 15:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.771 15:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.771 [2024-11-20 15:17:31.147121] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:44.771 [2024-11-20 15:17:31.147178] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:44.771 15:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.771 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:44.771 15:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.771 15:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.771 [2024-11-20 15:17:31.159167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:44.771 [2024-11-20 15:17:31.161441] 
bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:44.771 [2024-11-20 15:17:31.161603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:44.771 [2024-11-20 15:17:31.161746] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:44.771 [2024-11-20 15:17:31.161801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:44.771 15:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.771 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:44.771 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:44.771 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:44.771 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.771 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.771 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:44.771 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:44.771 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.771 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.771 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.771 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.771 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:44.771 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.771 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.771 15:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.771 15:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.771 15:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.771 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.771 "name": "Existed_Raid", 00:09:44.771 "uuid": "46acf5af-dd9a-4389-8004-59e76ae8ca71", 00:09:44.771 "strip_size_kb": 0, 00:09:44.771 "state": "configuring", 00:09:44.771 "raid_level": "raid1", 00:09:44.771 "superblock": true, 00:09:44.771 "num_base_bdevs": 3, 00:09:44.771 "num_base_bdevs_discovered": 1, 00:09:44.771 "num_base_bdevs_operational": 3, 00:09:44.771 "base_bdevs_list": [ 00:09:44.771 { 00:09:44.771 "name": "BaseBdev1", 00:09:44.771 "uuid": "9f32d5b5-688c-4007-b191-1b798321fa5c", 00:09:44.771 "is_configured": true, 00:09:44.771 "data_offset": 2048, 00:09:44.771 "data_size": 63488 00:09:44.771 }, 00:09:44.771 { 00:09:44.771 "name": "BaseBdev2", 00:09:44.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.771 "is_configured": false, 00:09:44.771 "data_offset": 0, 00:09:44.771 "data_size": 0 00:09:44.771 }, 00:09:44.771 { 00:09:44.771 "name": "BaseBdev3", 00:09:44.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.771 "is_configured": false, 00:09:44.771 "data_offset": 0, 00:09:44.771 "data_size": 0 00:09:44.771 } 00:09:44.771 ] 00:09:44.771 }' 00:09:44.771 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.771 15:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:45.337 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:45.337 15:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.337 15:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.337 [2024-11-20 15:17:31.613684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:45.337 BaseBdev2 00:09:45.337 15:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.337 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:45.337 15:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:45.337 15:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:45.337 15:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:45.337 15:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:45.337 15:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:45.337 15:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:45.337 15:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.337 15:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.337 15:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.337 15:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:45.337 15:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:45.337 15:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.337 [ 00:09:45.337 { 00:09:45.337 "name": "BaseBdev2", 00:09:45.337 "aliases": [ 00:09:45.337 "b114b1e4-deb5-470e-b134-05211ead7ab7" 00:09:45.337 ], 00:09:45.337 "product_name": "Malloc disk", 00:09:45.337 "block_size": 512, 00:09:45.337 "num_blocks": 65536, 00:09:45.337 "uuid": "b114b1e4-deb5-470e-b134-05211ead7ab7", 00:09:45.337 "assigned_rate_limits": { 00:09:45.338 "rw_ios_per_sec": 0, 00:09:45.338 "rw_mbytes_per_sec": 0, 00:09:45.338 "r_mbytes_per_sec": 0, 00:09:45.338 "w_mbytes_per_sec": 0 00:09:45.338 }, 00:09:45.338 "claimed": true, 00:09:45.338 "claim_type": "exclusive_write", 00:09:45.338 "zoned": false, 00:09:45.338 "supported_io_types": { 00:09:45.338 "read": true, 00:09:45.338 "write": true, 00:09:45.338 "unmap": true, 00:09:45.338 "flush": true, 00:09:45.338 "reset": true, 00:09:45.338 "nvme_admin": false, 00:09:45.338 "nvme_io": false, 00:09:45.338 "nvme_io_md": false, 00:09:45.338 "write_zeroes": true, 00:09:45.338 "zcopy": true, 00:09:45.338 "get_zone_info": false, 00:09:45.338 "zone_management": false, 00:09:45.338 "zone_append": false, 00:09:45.338 "compare": false, 00:09:45.338 "compare_and_write": false, 00:09:45.338 "abort": true, 00:09:45.338 "seek_hole": false, 00:09:45.338 "seek_data": false, 00:09:45.338 "copy": true, 00:09:45.338 "nvme_iov_md": false 00:09:45.338 }, 00:09:45.338 "memory_domains": [ 00:09:45.338 { 00:09:45.338 "dma_device_id": "system", 00:09:45.338 "dma_device_type": 1 00:09:45.338 }, 00:09:45.338 { 00:09:45.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.338 "dma_device_type": 2 00:09:45.338 } 00:09:45.338 ], 00:09:45.338 "driver_specific": {} 00:09:45.338 } 00:09:45.338 ] 00:09:45.338 15:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.338 15:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:09:45.338 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:45.338 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:45.338 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:45.338 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.338 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.338 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.338 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.338 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.338 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.338 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.338 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.338 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.338 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.338 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.338 15:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.338 15:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.338 15:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.338 
15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.338 "name": "Existed_Raid", 00:09:45.338 "uuid": "46acf5af-dd9a-4389-8004-59e76ae8ca71", 00:09:45.338 "strip_size_kb": 0, 00:09:45.338 "state": "configuring", 00:09:45.338 "raid_level": "raid1", 00:09:45.338 "superblock": true, 00:09:45.338 "num_base_bdevs": 3, 00:09:45.338 "num_base_bdevs_discovered": 2, 00:09:45.338 "num_base_bdevs_operational": 3, 00:09:45.338 "base_bdevs_list": [ 00:09:45.338 { 00:09:45.338 "name": "BaseBdev1", 00:09:45.338 "uuid": "9f32d5b5-688c-4007-b191-1b798321fa5c", 00:09:45.338 "is_configured": true, 00:09:45.338 "data_offset": 2048, 00:09:45.338 "data_size": 63488 00:09:45.338 }, 00:09:45.338 { 00:09:45.338 "name": "BaseBdev2", 00:09:45.338 "uuid": "b114b1e4-deb5-470e-b134-05211ead7ab7", 00:09:45.338 "is_configured": true, 00:09:45.338 "data_offset": 2048, 00:09:45.338 "data_size": 63488 00:09:45.338 }, 00:09:45.338 { 00:09:45.338 "name": "BaseBdev3", 00:09:45.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.338 "is_configured": false, 00:09:45.338 "data_offset": 0, 00:09:45.338 "data_size": 0 00:09:45.338 } 00:09:45.338 ] 00:09:45.338 }' 00:09:45.338 15:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.338 15:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.904 [2024-11-20 15:17:32.148740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:45.904 [2024-11-20 15:17:32.148996] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:09:45.904 [2024-11-20 15:17:32.149018] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:45.904 [2024-11-20 15:17:32.149296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:45.904 [2024-11-20 15:17:32.149435] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:45.904 [2024-11-20 15:17:32.149451] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:45.904 [2024-11-20 15:17:32.149593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.904 BaseBdev3 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.904 15:17:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.904 [ 00:09:45.904 { 00:09:45.904 "name": "BaseBdev3", 00:09:45.904 "aliases": [ 00:09:45.904 "69d491ac-5df9-4642-8a29-ceaa0831bbb2" 00:09:45.904 ], 00:09:45.904 "product_name": "Malloc disk", 00:09:45.904 "block_size": 512, 00:09:45.904 "num_blocks": 65536, 00:09:45.904 "uuid": "69d491ac-5df9-4642-8a29-ceaa0831bbb2", 00:09:45.904 "assigned_rate_limits": { 00:09:45.904 "rw_ios_per_sec": 0, 00:09:45.904 "rw_mbytes_per_sec": 0, 00:09:45.904 "r_mbytes_per_sec": 0, 00:09:45.904 "w_mbytes_per_sec": 0 00:09:45.904 }, 00:09:45.904 "claimed": true, 00:09:45.904 "claim_type": "exclusive_write", 00:09:45.904 "zoned": false, 00:09:45.904 "supported_io_types": { 00:09:45.904 "read": true, 00:09:45.904 "write": true, 00:09:45.904 "unmap": true, 00:09:45.904 "flush": true, 00:09:45.904 "reset": true, 00:09:45.904 "nvme_admin": false, 00:09:45.904 "nvme_io": false, 00:09:45.904 "nvme_io_md": false, 00:09:45.904 "write_zeroes": true, 00:09:45.904 "zcopy": true, 00:09:45.904 "get_zone_info": false, 00:09:45.904 "zone_management": false, 00:09:45.904 "zone_append": false, 00:09:45.904 "compare": false, 00:09:45.904 "compare_and_write": false, 00:09:45.904 "abort": true, 00:09:45.904 "seek_hole": false, 00:09:45.904 "seek_data": false, 00:09:45.904 "copy": true, 00:09:45.904 "nvme_iov_md": false 00:09:45.904 }, 00:09:45.904 "memory_domains": [ 00:09:45.904 { 00:09:45.904 "dma_device_id": "system", 00:09:45.904 "dma_device_type": 1 00:09:45.904 }, 00:09:45.904 { 00:09:45.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.904 "dma_device_type": 2 00:09:45.904 } 00:09:45.904 ], 00:09:45.904 "driver_specific": {} 00:09:45.904 } 00:09:45.904 ] 
00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.904 
15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.904 "name": "Existed_Raid", 00:09:45.904 "uuid": "46acf5af-dd9a-4389-8004-59e76ae8ca71", 00:09:45.904 "strip_size_kb": 0, 00:09:45.904 "state": "online", 00:09:45.904 "raid_level": "raid1", 00:09:45.904 "superblock": true, 00:09:45.904 "num_base_bdevs": 3, 00:09:45.904 "num_base_bdevs_discovered": 3, 00:09:45.904 "num_base_bdevs_operational": 3, 00:09:45.904 "base_bdevs_list": [ 00:09:45.904 { 00:09:45.904 "name": "BaseBdev1", 00:09:45.904 "uuid": "9f32d5b5-688c-4007-b191-1b798321fa5c", 00:09:45.904 "is_configured": true, 00:09:45.904 "data_offset": 2048, 00:09:45.904 "data_size": 63488 00:09:45.904 }, 00:09:45.904 { 00:09:45.904 "name": "BaseBdev2", 00:09:45.904 "uuid": "b114b1e4-deb5-470e-b134-05211ead7ab7", 00:09:45.904 "is_configured": true, 00:09:45.904 "data_offset": 2048, 00:09:45.904 "data_size": 63488 00:09:45.904 }, 00:09:45.904 { 00:09:45.904 "name": "BaseBdev3", 00:09:45.904 "uuid": "69d491ac-5df9-4642-8a29-ceaa0831bbb2", 00:09:45.904 "is_configured": true, 00:09:45.904 "data_offset": 2048, 00:09:45.904 "data_size": 63488 00:09:45.904 } 00:09:45.904 ] 00:09:45.904 }' 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.904 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.163 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:46.163 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:46.163 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:46.163 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:46.163 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:46.163 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:46.163 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:46.163 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:46.163 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.163 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.163 [2024-11-20 15:17:32.604440] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.163 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.163 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:46.163 "name": "Existed_Raid", 00:09:46.163 "aliases": [ 00:09:46.163 "46acf5af-dd9a-4389-8004-59e76ae8ca71" 00:09:46.163 ], 00:09:46.163 "product_name": "Raid Volume", 00:09:46.163 "block_size": 512, 00:09:46.163 "num_blocks": 63488, 00:09:46.163 "uuid": "46acf5af-dd9a-4389-8004-59e76ae8ca71", 00:09:46.163 "assigned_rate_limits": { 00:09:46.163 "rw_ios_per_sec": 0, 00:09:46.163 "rw_mbytes_per_sec": 0, 00:09:46.163 "r_mbytes_per_sec": 0, 00:09:46.163 "w_mbytes_per_sec": 0 00:09:46.163 }, 00:09:46.163 "claimed": false, 00:09:46.163 "zoned": false, 00:09:46.163 "supported_io_types": { 00:09:46.163 "read": true, 00:09:46.163 "write": true, 00:09:46.163 "unmap": false, 00:09:46.163 "flush": false, 00:09:46.163 "reset": true, 00:09:46.163 "nvme_admin": false, 00:09:46.163 "nvme_io": false, 00:09:46.163 "nvme_io_md": false, 00:09:46.163 "write_zeroes": true, 
00:09:46.163 "zcopy": false, 00:09:46.163 "get_zone_info": false, 00:09:46.163 "zone_management": false, 00:09:46.163 "zone_append": false, 00:09:46.163 "compare": false, 00:09:46.163 "compare_and_write": false, 00:09:46.163 "abort": false, 00:09:46.163 "seek_hole": false, 00:09:46.163 "seek_data": false, 00:09:46.163 "copy": false, 00:09:46.163 "nvme_iov_md": false 00:09:46.163 }, 00:09:46.163 "memory_domains": [ 00:09:46.163 { 00:09:46.163 "dma_device_id": "system", 00:09:46.163 "dma_device_type": 1 00:09:46.163 }, 00:09:46.163 { 00:09:46.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.163 "dma_device_type": 2 00:09:46.163 }, 00:09:46.163 { 00:09:46.163 "dma_device_id": "system", 00:09:46.163 "dma_device_type": 1 00:09:46.163 }, 00:09:46.163 { 00:09:46.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.164 "dma_device_type": 2 00:09:46.164 }, 00:09:46.164 { 00:09:46.164 "dma_device_id": "system", 00:09:46.164 "dma_device_type": 1 00:09:46.164 }, 00:09:46.164 { 00:09:46.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.164 "dma_device_type": 2 00:09:46.164 } 00:09:46.164 ], 00:09:46.164 "driver_specific": { 00:09:46.164 "raid": { 00:09:46.164 "uuid": "46acf5af-dd9a-4389-8004-59e76ae8ca71", 00:09:46.164 "strip_size_kb": 0, 00:09:46.164 "state": "online", 00:09:46.164 "raid_level": "raid1", 00:09:46.164 "superblock": true, 00:09:46.164 "num_base_bdevs": 3, 00:09:46.164 "num_base_bdevs_discovered": 3, 00:09:46.164 "num_base_bdevs_operational": 3, 00:09:46.164 "base_bdevs_list": [ 00:09:46.164 { 00:09:46.164 "name": "BaseBdev1", 00:09:46.164 "uuid": "9f32d5b5-688c-4007-b191-1b798321fa5c", 00:09:46.164 "is_configured": true, 00:09:46.164 "data_offset": 2048, 00:09:46.164 "data_size": 63488 00:09:46.164 }, 00:09:46.164 { 00:09:46.164 "name": "BaseBdev2", 00:09:46.164 "uuid": "b114b1e4-deb5-470e-b134-05211ead7ab7", 00:09:46.164 "is_configured": true, 00:09:46.164 "data_offset": 2048, 00:09:46.164 "data_size": 63488 00:09:46.164 }, 00:09:46.164 { 
00:09:46.164 "name": "BaseBdev3", 00:09:46.164 "uuid": "69d491ac-5df9-4642-8a29-ceaa0831bbb2", 00:09:46.164 "is_configured": true, 00:09:46.164 "data_offset": 2048, 00:09:46.164 "data_size": 63488 00:09:46.164 } 00:09:46.164 ] 00:09:46.164 } 00:09:46.164 } 00:09:46.164 }' 00:09:46.164 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:46.422 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:46.422 BaseBdev2 00:09:46.422 BaseBdev3' 00:09:46.422 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.422 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:46.422 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.422 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:46.422 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.422 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.422 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.422 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.422 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.422 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.422 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.422 15:17:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.422 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:46.422 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.422 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.422 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.422 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.422 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.422 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.422 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:46.422 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.422 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.422 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.422 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.422 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.422 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.422 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:46.422 15:17:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.422 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.422 [2024-11-20 15:17:32.859834] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:46.697 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.697 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:46.697 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:46.697 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:46.697 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:46.697 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:46.697 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:46.697 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.697 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.697 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.697 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.697 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:46.697 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.697 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.697 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.697 
15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.697 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.697 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.697 15:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.697 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.697 15:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.697 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.697 "name": "Existed_Raid", 00:09:46.697 "uuid": "46acf5af-dd9a-4389-8004-59e76ae8ca71", 00:09:46.697 "strip_size_kb": 0, 00:09:46.697 "state": "online", 00:09:46.697 "raid_level": "raid1", 00:09:46.697 "superblock": true, 00:09:46.697 "num_base_bdevs": 3, 00:09:46.697 "num_base_bdevs_discovered": 2, 00:09:46.697 "num_base_bdevs_operational": 2, 00:09:46.697 "base_bdevs_list": [ 00:09:46.697 { 00:09:46.697 "name": null, 00:09:46.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.697 "is_configured": false, 00:09:46.697 "data_offset": 0, 00:09:46.697 "data_size": 63488 00:09:46.697 }, 00:09:46.697 { 00:09:46.697 "name": "BaseBdev2", 00:09:46.697 "uuid": "b114b1e4-deb5-470e-b134-05211ead7ab7", 00:09:46.697 "is_configured": true, 00:09:46.697 "data_offset": 2048, 00:09:46.697 "data_size": 63488 00:09:46.697 }, 00:09:46.697 { 00:09:46.697 "name": "BaseBdev3", 00:09:46.697 "uuid": "69d491ac-5df9-4642-8a29-ceaa0831bbb2", 00:09:46.697 "is_configured": true, 00:09:46.697 "data_offset": 2048, 00:09:46.697 "data_size": 63488 00:09:46.697 } 00:09:46.697 ] 00:09:46.697 }' 00:09:46.697 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.697 
15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.988 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:46.988 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:46.988 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.988 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:46.988 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.988 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.988 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.988 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:46.988 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:46.988 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:46.988 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.988 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.988 [2024-11-20 15:17:33.432834] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:47.247 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.247 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:47.247 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:47.247 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:47.247 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.247 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:47.247 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.247 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.247 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:47.247 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:47.247 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:47.247 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.247 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.247 [2024-11-20 15:17:33.579630] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:47.247 [2024-11-20 15:17:33.579750] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:47.247 [2024-11-20 15:17:33.677460] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.247 [2024-11-20 15:17:33.677518] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.248 [2024-11-20 15:17:33.677533] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:47.248 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.248 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:47.248 15:17:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:47.248 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.248 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:47.248 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.248 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.248 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.248 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:47.248 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:47.248 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:47.248 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:47.248 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:47.248 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:47.248 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.248 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.508 BaseBdev2 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.508 [ 00:09:47.508 { 00:09:47.508 "name": "BaseBdev2", 00:09:47.508 "aliases": [ 00:09:47.508 "893e3f13-5953-4898-a823-eb9a7579b38c" 00:09:47.508 ], 00:09:47.508 "product_name": "Malloc disk", 00:09:47.508 "block_size": 512, 00:09:47.508 "num_blocks": 65536, 00:09:47.508 "uuid": "893e3f13-5953-4898-a823-eb9a7579b38c", 00:09:47.508 "assigned_rate_limits": { 00:09:47.508 "rw_ios_per_sec": 0, 00:09:47.508 "rw_mbytes_per_sec": 0, 00:09:47.508 "r_mbytes_per_sec": 0, 00:09:47.508 "w_mbytes_per_sec": 0 00:09:47.508 }, 00:09:47.508 "claimed": false, 00:09:47.508 "zoned": false, 00:09:47.508 "supported_io_types": { 00:09:47.508 "read": true, 00:09:47.508 "write": true, 00:09:47.508 "unmap": true, 00:09:47.508 "flush": true, 00:09:47.508 "reset": true, 00:09:47.508 "nvme_admin": false, 00:09:47.508 "nvme_io": false, 00:09:47.508 
"nvme_io_md": false, 00:09:47.508 "write_zeroes": true, 00:09:47.508 "zcopy": true, 00:09:47.508 "get_zone_info": false, 00:09:47.508 "zone_management": false, 00:09:47.508 "zone_append": false, 00:09:47.508 "compare": false, 00:09:47.508 "compare_and_write": false, 00:09:47.508 "abort": true, 00:09:47.508 "seek_hole": false, 00:09:47.508 "seek_data": false, 00:09:47.508 "copy": true, 00:09:47.508 "nvme_iov_md": false 00:09:47.508 }, 00:09:47.508 "memory_domains": [ 00:09:47.508 { 00:09:47.508 "dma_device_id": "system", 00:09:47.508 "dma_device_type": 1 00:09:47.508 }, 00:09:47.508 { 00:09:47.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.508 "dma_device_type": 2 00:09:47.508 } 00:09:47.508 ], 00:09:47.508 "driver_specific": {} 00:09:47.508 } 00:09:47.508 ] 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.508 BaseBdev3 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.508 [ 00:09:47.508 { 00:09:47.508 "name": "BaseBdev3", 00:09:47.508 "aliases": [ 00:09:47.508 "d88a41bb-5625-45d0-96cf-1cfe3d055925" 00:09:47.508 ], 00:09:47.508 "product_name": "Malloc disk", 00:09:47.508 "block_size": 512, 00:09:47.508 "num_blocks": 65536, 00:09:47.508 "uuid": "d88a41bb-5625-45d0-96cf-1cfe3d055925", 00:09:47.508 "assigned_rate_limits": { 00:09:47.508 "rw_ios_per_sec": 0, 00:09:47.508 "rw_mbytes_per_sec": 0, 00:09:47.508 "r_mbytes_per_sec": 0, 00:09:47.508 "w_mbytes_per_sec": 0 00:09:47.508 }, 00:09:47.508 "claimed": false, 00:09:47.508 "zoned": false, 00:09:47.508 "supported_io_types": { 00:09:47.508 "read": true, 00:09:47.508 "write": true, 00:09:47.508 "unmap": true, 00:09:47.508 "flush": true, 00:09:47.508 "reset": true, 00:09:47.508 "nvme_admin": false, 
00:09:47.508 "nvme_io": false, 00:09:47.508 "nvme_io_md": false, 00:09:47.508 "write_zeroes": true, 00:09:47.508 "zcopy": true, 00:09:47.508 "get_zone_info": false, 00:09:47.508 "zone_management": false, 00:09:47.508 "zone_append": false, 00:09:47.508 "compare": false, 00:09:47.508 "compare_and_write": false, 00:09:47.508 "abort": true, 00:09:47.508 "seek_hole": false, 00:09:47.508 "seek_data": false, 00:09:47.508 "copy": true, 00:09:47.508 "nvme_iov_md": false 00:09:47.508 }, 00:09:47.508 "memory_domains": [ 00:09:47.508 { 00:09:47.508 "dma_device_id": "system", 00:09:47.508 "dma_device_type": 1 00:09:47.508 }, 00:09:47.508 { 00:09:47.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.508 "dma_device_type": 2 00:09:47.508 } 00:09:47.508 ], 00:09:47.508 "driver_specific": {} 00:09:47.508 } 00:09:47.508 ] 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.508 [2024-11-20 15:17:33.916022] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:47.508 [2024-11-20 15:17:33.916191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:47.508 [2024-11-20 15:17:33.916279] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:47.508 [2024-11-20 15:17:33.918483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:47.508 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.509 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.509 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.509 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.509 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.509 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.509 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.509 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.509 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.509 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.509 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.509 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.509 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.509 
15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.509 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.509 "name": "Existed_Raid", 00:09:47.509 "uuid": "9fc0050d-c877-4afe-b950-c5dd3528f541", 00:09:47.509 "strip_size_kb": 0, 00:09:47.509 "state": "configuring", 00:09:47.509 "raid_level": "raid1", 00:09:47.509 "superblock": true, 00:09:47.509 "num_base_bdevs": 3, 00:09:47.509 "num_base_bdevs_discovered": 2, 00:09:47.509 "num_base_bdevs_operational": 3, 00:09:47.509 "base_bdevs_list": [ 00:09:47.509 { 00:09:47.509 "name": "BaseBdev1", 00:09:47.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.509 "is_configured": false, 00:09:47.509 "data_offset": 0, 00:09:47.509 "data_size": 0 00:09:47.509 }, 00:09:47.509 { 00:09:47.509 "name": "BaseBdev2", 00:09:47.509 "uuid": "893e3f13-5953-4898-a823-eb9a7579b38c", 00:09:47.509 "is_configured": true, 00:09:47.509 "data_offset": 2048, 00:09:47.509 "data_size": 63488 00:09:47.509 }, 00:09:47.509 { 00:09:47.509 "name": "BaseBdev3", 00:09:47.509 "uuid": "d88a41bb-5625-45d0-96cf-1cfe3d055925", 00:09:47.509 "is_configured": true, 00:09:47.509 "data_offset": 2048, 00:09:47.509 "data_size": 63488 00:09:47.509 } 00:09:47.509 ] 00:09:47.509 }' 00:09:47.509 15:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.509 15:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.078 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:48.078 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.078 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.078 [2024-11-20 15:17:34.351522] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:48.078 15:17:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.078 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:48.078 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.078 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.078 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.078 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.078 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.078 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.078 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.078 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.078 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.078 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.078 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.078 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.078 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.078 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.078 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.078 "name": 
"Existed_Raid", 00:09:48.078 "uuid": "9fc0050d-c877-4afe-b950-c5dd3528f541", 00:09:48.078 "strip_size_kb": 0, 00:09:48.078 "state": "configuring", 00:09:48.078 "raid_level": "raid1", 00:09:48.078 "superblock": true, 00:09:48.078 "num_base_bdevs": 3, 00:09:48.078 "num_base_bdevs_discovered": 1, 00:09:48.078 "num_base_bdevs_operational": 3, 00:09:48.078 "base_bdevs_list": [ 00:09:48.078 { 00:09:48.078 "name": "BaseBdev1", 00:09:48.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.078 "is_configured": false, 00:09:48.078 "data_offset": 0, 00:09:48.078 "data_size": 0 00:09:48.078 }, 00:09:48.078 { 00:09:48.078 "name": null, 00:09:48.078 "uuid": "893e3f13-5953-4898-a823-eb9a7579b38c", 00:09:48.078 "is_configured": false, 00:09:48.078 "data_offset": 0, 00:09:48.078 "data_size": 63488 00:09:48.078 }, 00:09:48.078 { 00:09:48.078 "name": "BaseBdev3", 00:09:48.078 "uuid": "d88a41bb-5625-45d0-96cf-1cfe3d055925", 00:09:48.078 "is_configured": true, 00:09:48.078 "data_offset": 2048, 00:09:48.078 "data_size": 63488 00:09:48.078 } 00:09:48.078 ] 00:09:48.078 }' 00:09:48.078 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.078 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.338 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.338 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.338 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.338 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:48.338 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.338 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:48.338 
15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:48.338 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.338 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.598 [2024-11-20 15:17:34.845610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:48.598 BaseBdev1 00:09:48.598 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.598 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:48.598 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:48.598 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:48.599 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:48.599 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:48.599 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:48.599 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:48.599 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.599 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.599 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.599 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:48.599 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:48.599 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.599 [ 00:09:48.599 { 00:09:48.599 "name": "BaseBdev1", 00:09:48.599 "aliases": [ 00:09:48.599 "f7127189-db40-4b9d-a2bd-0dbd046c8e82" 00:09:48.599 ], 00:09:48.599 "product_name": "Malloc disk", 00:09:48.599 "block_size": 512, 00:09:48.599 "num_blocks": 65536, 00:09:48.599 "uuid": "f7127189-db40-4b9d-a2bd-0dbd046c8e82", 00:09:48.599 "assigned_rate_limits": { 00:09:48.599 "rw_ios_per_sec": 0, 00:09:48.599 "rw_mbytes_per_sec": 0, 00:09:48.599 "r_mbytes_per_sec": 0, 00:09:48.599 "w_mbytes_per_sec": 0 00:09:48.599 }, 00:09:48.599 "claimed": true, 00:09:48.599 "claim_type": "exclusive_write", 00:09:48.599 "zoned": false, 00:09:48.599 "supported_io_types": { 00:09:48.599 "read": true, 00:09:48.599 "write": true, 00:09:48.599 "unmap": true, 00:09:48.599 "flush": true, 00:09:48.599 "reset": true, 00:09:48.599 "nvme_admin": false, 00:09:48.599 "nvme_io": false, 00:09:48.599 "nvme_io_md": false, 00:09:48.599 "write_zeroes": true, 00:09:48.599 "zcopy": true, 00:09:48.599 "get_zone_info": false, 00:09:48.599 "zone_management": false, 00:09:48.599 "zone_append": false, 00:09:48.599 "compare": false, 00:09:48.599 "compare_and_write": false, 00:09:48.599 "abort": true, 00:09:48.599 "seek_hole": false, 00:09:48.599 "seek_data": false, 00:09:48.599 "copy": true, 00:09:48.599 "nvme_iov_md": false 00:09:48.599 }, 00:09:48.599 "memory_domains": [ 00:09:48.599 { 00:09:48.599 "dma_device_id": "system", 00:09:48.599 "dma_device_type": 1 00:09:48.599 }, 00:09:48.599 { 00:09:48.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.599 "dma_device_type": 2 00:09:48.599 } 00:09:48.599 ], 00:09:48.599 "driver_specific": {} 00:09:48.599 } 00:09:48.599 ] 00:09:48.599 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.599 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:48.599 
15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:48.599 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.599 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.599 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.599 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.599 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.599 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.599 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.599 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.599 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.599 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.599 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.599 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.599 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.599 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.599 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.599 "name": "Existed_Raid", 00:09:48.599 "uuid": "9fc0050d-c877-4afe-b950-c5dd3528f541", 00:09:48.599 "strip_size_kb": 0, 
00:09:48.599 "state": "configuring", 00:09:48.599 "raid_level": "raid1", 00:09:48.599 "superblock": true, 00:09:48.599 "num_base_bdevs": 3, 00:09:48.599 "num_base_bdevs_discovered": 2, 00:09:48.599 "num_base_bdevs_operational": 3, 00:09:48.599 "base_bdevs_list": [ 00:09:48.599 { 00:09:48.599 "name": "BaseBdev1", 00:09:48.599 "uuid": "f7127189-db40-4b9d-a2bd-0dbd046c8e82", 00:09:48.599 "is_configured": true, 00:09:48.599 "data_offset": 2048, 00:09:48.599 "data_size": 63488 00:09:48.599 }, 00:09:48.599 { 00:09:48.599 "name": null, 00:09:48.599 "uuid": "893e3f13-5953-4898-a823-eb9a7579b38c", 00:09:48.599 "is_configured": false, 00:09:48.599 "data_offset": 0, 00:09:48.599 "data_size": 63488 00:09:48.599 }, 00:09:48.599 { 00:09:48.599 "name": "BaseBdev3", 00:09:48.599 "uuid": "d88a41bb-5625-45d0-96cf-1cfe3d055925", 00:09:48.599 "is_configured": true, 00:09:48.599 "data_offset": 2048, 00:09:48.599 "data_size": 63488 00:09:48.599 } 00:09:48.599 ] 00:09:48.599 }' 00:09:48.599 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.599 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.858 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.858 15:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.858 15:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.858 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:48.858 15:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.858 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:48.858 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:09:48.858 15:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.858 15:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.858 [2024-11-20 15:17:35.324983] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:48.859 15:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.859 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:48.859 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.859 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.859 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.859 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.859 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.859 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.859 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.859 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.859 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.859 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.859 15:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.859 15:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.859 15:17:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.119 15:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.119 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.119 "name": "Existed_Raid", 00:09:49.119 "uuid": "9fc0050d-c877-4afe-b950-c5dd3528f541", 00:09:49.119 "strip_size_kb": 0, 00:09:49.119 "state": "configuring", 00:09:49.119 "raid_level": "raid1", 00:09:49.119 "superblock": true, 00:09:49.119 "num_base_bdevs": 3, 00:09:49.119 "num_base_bdevs_discovered": 1, 00:09:49.119 "num_base_bdevs_operational": 3, 00:09:49.119 "base_bdevs_list": [ 00:09:49.119 { 00:09:49.119 "name": "BaseBdev1", 00:09:49.119 "uuid": "f7127189-db40-4b9d-a2bd-0dbd046c8e82", 00:09:49.119 "is_configured": true, 00:09:49.119 "data_offset": 2048, 00:09:49.119 "data_size": 63488 00:09:49.119 }, 00:09:49.119 { 00:09:49.119 "name": null, 00:09:49.119 "uuid": "893e3f13-5953-4898-a823-eb9a7579b38c", 00:09:49.119 "is_configured": false, 00:09:49.119 "data_offset": 0, 00:09:49.119 "data_size": 63488 00:09:49.119 }, 00:09:49.119 { 00:09:49.119 "name": null, 00:09:49.119 "uuid": "d88a41bb-5625-45d0-96cf-1cfe3d055925", 00:09:49.119 "is_configured": false, 00:09:49.119 "data_offset": 0, 00:09:49.119 "data_size": 63488 00:09:49.119 } 00:09:49.119 ] 00:09:49.119 }' 00:09:49.119 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.119 15:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.378 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.378 15:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.378 15:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.379 15:17:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:49.379 15:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.379 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:49.379 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:49.379 15:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.379 15:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.379 [2024-11-20 15:17:35.764555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:49.379 15:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.379 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:49.379 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.379 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.379 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.379 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.379 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.379 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.379 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.379 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:49.379 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.379 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.379 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.379 15:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.379 15:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.379 15:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.379 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.379 "name": "Existed_Raid", 00:09:49.379 "uuid": "9fc0050d-c877-4afe-b950-c5dd3528f541", 00:09:49.379 "strip_size_kb": 0, 00:09:49.379 "state": "configuring", 00:09:49.379 "raid_level": "raid1", 00:09:49.379 "superblock": true, 00:09:49.379 "num_base_bdevs": 3, 00:09:49.379 "num_base_bdevs_discovered": 2, 00:09:49.379 "num_base_bdevs_operational": 3, 00:09:49.379 "base_bdevs_list": [ 00:09:49.379 { 00:09:49.379 "name": "BaseBdev1", 00:09:49.379 "uuid": "f7127189-db40-4b9d-a2bd-0dbd046c8e82", 00:09:49.379 "is_configured": true, 00:09:49.379 "data_offset": 2048, 00:09:49.379 "data_size": 63488 00:09:49.379 }, 00:09:49.379 { 00:09:49.379 "name": null, 00:09:49.379 "uuid": "893e3f13-5953-4898-a823-eb9a7579b38c", 00:09:49.379 "is_configured": false, 00:09:49.379 "data_offset": 0, 00:09:49.379 "data_size": 63488 00:09:49.379 }, 00:09:49.379 { 00:09:49.379 "name": "BaseBdev3", 00:09:49.379 "uuid": "d88a41bb-5625-45d0-96cf-1cfe3d055925", 00:09:49.379 "is_configured": true, 00:09:49.379 "data_offset": 2048, 00:09:49.379 "data_size": 63488 00:09:49.379 } 00:09:49.379 ] 00:09:49.379 }' 00:09:49.379 15:17:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.379 15:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.947 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.947 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:49.947 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.947 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.947 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.948 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:49.948 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:49.948 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.948 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.948 [2024-11-20 15:17:36.235880] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:49.948 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.948 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:49.948 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.948 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.948 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.948 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:09:49.948 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.948 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.948 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.948 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.948 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.948 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.948 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.948 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.948 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.948 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.948 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.948 "name": "Existed_Raid", 00:09:49.948 "uuid": "9fc0050d-c877-4afe-b950-c5dd3528f541", 00:09:49.948 "strip_size_kb": 0, 00:09:49.948 "state": "configuring", 00:09:49.948 "raid_level": "raid1", 00:09:49.948 "superblock": true, 00:09:49.948 "num_base_bdevs": 3, 00:09:49.948 "num_base_bdevs_discovered": 1, 00:09:49.948 "num_base_bdevs_operational": 3, 00:09:49.948 "base_bdevs_list": [ 00:09:49.948 { 00:09:49.948 "name": null, 00:09:49.948 "uuid": "f7127189-db40-4b9d-a2bd-0dbd046c8e82", 00:09:49.948 "is_configured": false, 00:09:49.948 "data_offset": 0, 00:09:49.948 "data_size": 63488 00:09:49.948 }, 00:09:49.948 { 00:09:49.948 "name": null, 00:09:49.948 "uuid": 
"893e3f13-5953-4898-a823-eb9a7579b38c", 00:09:49.948 "is_configured": false, 00:09:49.948 "data_offset": 0, 00:09:49.948 "data_size": 63488 00:09:49.948 }, 00:09:49.948 { 00:09:49.948 "name": "BaseBdev3", 00:09:49.948 "uuid": "d88a41bb-5625-45d0-96cf-1cfe3d055925", 00:09:49.948 "is_configured": true, 00:09:49.948 "data_offset": 2048, 00:09:49.948 "data_size": 63488 00:09:49.948 } 00:09:49.948 ] 00:09:49.948 }' 00:09:49.948 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.948 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.516 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.516 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:50.516 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.516 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.516 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.516 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:50.516 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:50.516 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.516 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.516 [2024-11-20 15:17:36.795794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:50.516 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.516 15:17:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:50.516 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.516 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.516 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.516 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.516 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.516 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.516 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.516 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.516 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.516 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.516 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.516 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.516 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.516 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.516 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.516 "name": "Existed_Raid", 00:09:50.516 "uuid": "9fc0050d-c877-4afe-b950-c5dd3528f541", 00:09:50.516 "strip_size_kb": 0, 00:09:50.516 "state": "configuring", 00:09:50.516 
"raid_level": "raid1", 00:09:50.516 "superblock": true, 00:09:50.516 "num_base_bdevs": 3, 00:09:50.516 "num_base_bdevs_discovered": 2, 00:09:50.516 "num_base_bdevs_operational": 3, 00:09:50.516 "base_bdevs_list": [ 00:09:50.516 { 00:09:50.516 "name": null, 00:09:50.516 "uuid": "f7127189-db40-4b9d-a2bd-0dbd046c8e82", 00:09:50.516 "is_configured": false, 00:09:50.516 "data_offset": 0, 00:09:50.516 "data_size": 63488 00:09:50.516 }, 00:09:50.516 { 00:09:50.516 "name": "BaseBdev2", 00:09:50.516 "uuid": "893e3f13-5953-4898-a823-eb9a7579b38c", 00:09:50.516 "is_configured": true, 00:09:50.516 "data_offset": 2048, 00:09:50.516 "data_size": 63488 00:09:50.516 }, 00:09:50.516 { 00:09:50.516 "name": "BaseBdev3", 00:09:50.516 "uuid": "d88a41bb-5625-45d0-96cf-1cfe3d055925", 00:09:50.516 "is_configured": true, 00:09:50.516 "data_offset": 2048, 00:09:50.516 "data_size": 63488 00:09:50.516 } 00:09:50.516 ] 00:09:50.516 }' 00:09:50.516 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.516 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.825 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:50.825 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.825 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.825 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.825 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.825 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:50.825 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.825 15:17:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.825 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.825 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:50.825 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.825 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f7127189-db40-4b9d-a2bd-0dbd046c8e82 00:09:50.825 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.825 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.083 [2024-11-20 15:17:37.340895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:51.083 [2024-11-20 15:17:37.341121] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:51.083 [2024-11-20 15:17:37.341136] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:51.083 [2024-11-20 15:17:37.341393] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:51.083 NewBaseBdev 00:09:51.083 [2024-11-20 15:17:37.341530] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:51.083 [2024-11-20 15:17:37.341543] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:51.083 [2024-11-20 15:17:37.341693] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.083 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.083 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:51.083 
15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:51.083 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:51.083 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:51.083 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:51.083 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:51.083 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:51.083 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.083 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.083 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.083 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:51.083 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.083 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.083 [ 00:09:51.083 { 00:09:51.083 "name": "NewBaseBdev", 00:09:51.083 "aliases": [ 00:09:51.083 "f7127189-db40-4b9d-a2bd-0dbd046c8e82" 00:09:51.083 ], 00:09:51.083 "product_name": "Malloc disk", 00:09:51.083 "block_size": 512, 00:09:51.083 "num_blocks": 65536, 00:09:51.083 "uuid": "f7127189-db40-4b9d-a2bd-0dbd046c8e82", 00:09:51.083 "assigned_rate_limits": { 00:09:51.083 "rw_ios_per_sec": 0, 00:09:51.083 "rw_mbytes_per_sec": 0, 00:09:51.083 "r_mbytes_per_sec": 0, 00:09:51.083 "w_mbytes_per_sec": 0 00:09:51.083 }, 00:09:51.083 "claimed": true, 00:09:51.083 "claim_type": "exclusive_write", 00:09:51.083 
"zoned": false, 00:09:51.083 "supported_io_types": { 00:09:51.083 "read": true, 00:09:51.083 "write": true, 00:09:51.083 "unmap": true, 00:09:51.083 "flush": true, 00:09:51.083 "reset": true, 00:09:51.083 "nvme_admin": false, 00:09:51.083 "nvme_io": false, 00:09:51.083 "nvme_io_md": false, 00:09:51.083 "write_zeroes": true, 00:09:51.083 "zcopy": true, 00:09:51.083 "get_zone_info": false, 00:09:51.083 "zone_management": false, 00:09:51.083 "zone_append": false, 00:09:51.083 "compare": false, 00:09:51.083 "compare_and_write": false, 00:09:51.083 "abort": true, 00:09:51.083 "seek_hole": false, 00:09:51.083 "seek_data": false, 00:09:51.083 "copy": true, 00:09:51.083 "nvme_iov_md": false 00:09:51.083 }, 00:09:51.083 "memory_domains": [ 00:09:51.083 { 00:09:51.083 "dma_device_id": "system", 00:09:51.083 "dma_device_type": 1 00:09:51.083 }, 00:09:51.083 { 00:09:51.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.083 "dma_device_type": 2 00:09:51.083 } 00:09:51.083 ], 00:09:51.083 "driver_specific": {} 00:09:51.083 } 00:09:51.083 ] 00:09:51.083 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.083 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:51.083 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:51.084 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.084 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:51.084 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.084 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.084 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:51.084 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.084 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.084 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.084 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.084 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.084 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.084 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.084 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.084 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.084 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.084 "name": "Existed_Raid", 00:09:51.084 "uuid": "9fc0050d-c877-4afe-b950-c5dd3528f541", 00:09:51.084 "strip_size_kb": 0, 00:09:51.084 "state": "online", 00:09:51.084 "raid_level": "raid1", 00:09:51.084 "superblock": true, 00:09:51.084 "num_base_bdevs": 3, 00:09:51.084 "num_base_bdevs_discovered": 3, 00:09:51.084 "num_base_bdevs_operational": 3, 00:09:51.084 "base_bdevs_list": [ 00:09:51.084 { 00:09:51.084 "name": "NewBaseBdev", 00:09:51.084 "uuid": "f7127189-db40-4b9d-a2bd-0dbd046c8e82", 00:09:51.084 "is_configured": true, 00:09:51.084 "data_offset": 2048, 00:09:51.084 "data_size": 63488 00:09:51.084 }, 00:09:51.084 { 00:09:51.084 "name": "BaseBdev2", 00:09:51.084 "uuid": "893e3f13-5953-4898-a823-eb9a7579b38c", 00:09:51.084 "is_configured": true, 00:09:51.084 "data_offset": 2048, 00:09:51.084 "data_size": 63488 00:09:51.084 }, 00:09:51.084 
{ 00:09:51.084 "name": "BaseBdev3", 00:09:51.084 "uuid": "d88a41bb-5625-45d0-96cf-1cfe3d055925", 00:09:51.084 "is_configured": true, 00:09:51.084 "data_offset": 2048, 00:09:51.084 "data_size": 63488 00:09:51.084 } 00:09:51.084 ] 00:09:51.084 }' 00:09:51.084 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.084 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.342 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:51.342 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:51.342 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:51.342 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:51.342 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:51.342 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:51.342 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:51.342 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.342 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.342 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:51.342 [2024-11-20 15:17:37.745071] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:51.342 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.342 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:51.342 "name": "Existed_Raid", 00:09:51.342 
"aliases": [ 00:09:51.342 "9fc0050d-c877-4afe-b950-c5dd3528f541" 00:09:51.342 ], 00:09:51.342 "product_name": "Raid Volume", 00:09:51.342 "block_size": 512, 00:09:51.342 "num_blocks": 63488, 00:09:51.342 "uuid": "9fc0050d-c877-4afe-b950-c5dd3528f541", 00:09:51.342 "assigned_rate_limits": { 00:09:51.342 "rw_ios_per_sec": 0, 00:09:51.342 "rw_mbytes_per_sec": 0, 00:09:51.342 "r_mbytes_per_sec": 0, 00:09:51.342 "w_mbytes_per_sec": 0 00:09:51.342 }, 00:09:51.342 "claimed": false, 00:09:51.342 "zoned": false, 00:09:51.342 "supported_io_types": { 00:09:51.342 "read": true, 00:09:51.342 "write": true, 00:09:51.342 "unmap": false, 00:09:51.342 "flush": false, 00:09:51.342 "reset": true, 00:09:51.342 "nvme_admin": false, 00:09:51.342 "nvme_io": false, 00:09:51.342 "nvme_io_md": false, 00:09:51.342 "write_zeroes": true, 00:09:51.342 "zcopy": false, 00:09:51.342 "get_zone_info": false, 00:09:51.342 "zone_management": false, 00:09:51.342 "zone_append": false, 00:09:51.342 "compare": false, 00:09:51.342 "compare_and_write": false, 00:09:51.342 "abort": false, 00:09:51.342 "seek_hole": false, 00:09:51.342 "seek_data": false, 00:09:51.342 "copy": false, 00:09:51.342 "nvme_iov_md": false 00:09:51.342 }, 00:09:51.342 "memory_domains": [ 00:09:51.342 { 00:09:51.342 "dma_device_id": "system", 00:09:51.342 "dma_device_type": 1 00:09:51.342 }, 00:09:51.342 { 00:09:51.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.342 "dma_device_type": 2 00:09:51.342 }, 00:09:51.342 { 00:09:51.342 "dma_device_id": "system", 00:09:51.342 "dma_device_type": 1 00:09:51.342 }, 00:09:51.342 { 00:09:51.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.342 "dma_device_type": 2 00:09:51.342 }, 00:09:51.342 { 00:09:51.342 "dma_device_id": "system", 00:09:51.342 "dma_device_type": 1 00:09:51.342 }, 00:09:51.342 { 00:09:51.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.342 "dma_device_type": 2 00:09:51.342 } 00:09:51.342 ], 00:09:51.342 "driver_specific": { 00:09:51.342 "raid": { 00:09:51.343 
"uuid": "9fc0050d-c877-4afe-b950-c5dd3528f541", 00:09:51.343 "strip_size_kb": 0, 00:09:51.343 "state": "online", 00:09:51.343 "raid_level": "raid1", 00:09:51.343 "superblock": true, 00:09:51.343 "num_base_bdevs": 3, 00:09:51.343 "num_base_bdevs_discovered": 3, 00:09:51.343 "num_base_bdevs_operational": 3, 00:09:51.343 "base_bdevs_list": [ 00:09:51.343 { 00:09:51.343 "name": "NewBaseBdev", 00:09:51.343 "uuid": "f7127189-db40-4b9d-a2bd-0dbd046c8e82", 00:09:51.343 "is_configured": true, 00:09:51.343 "data_offset": 2048, 00:09:51.343 "data_size": 63488 00:09:51.343 }, 00:09:51.343 { 00:09:51.343 "name": "BaseBdev2", 00:09:51.343 "uuid": "893e3f13-5953-4898-a823-eb9a7579b38c", 00:09:51.343 "is_configured": true, 00:09:51.343 "data_offset": 2048, 00:09:51.343 "data_size": 63488 00:09:51.343 }, 00:09:51.343 { 00:09:51.343 "name": "BaseBdev3", 00:09:51.343 "uuid": "d88a41bb-5625-45d0-96cf-1cfe3d055925", 00:09:51.343 "is_configured": true, 00:09:51.343 "data_offset": 2048, 00:09:51.343 "data_size": 63488 00:09:51.343 } 00:09:51.343 ] 00:09:51.343 } 00:09:51.343 } 00:09:51.343 }' 00:09:51.343 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:51.343 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:51.343 BaseBdev2 00:09:51.343 BaseBdev3' 00:09:51.343 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.602 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:51.602 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.602 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:51.602 15:17:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.603 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.603 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.603 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.603 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.603 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.603 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.603 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.603 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:51.603 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.603 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.603 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.603 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.603 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.603 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.603 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:51.603 15:17:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.603 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.603 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.603 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.603 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.603 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.603 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:51.603 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.603 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.603 [2024-11-20 15:17:38.000566] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:51.603 [2024-11-20 15:17:38.000600] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.603 [2024-11-20 15:17:38.000684] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.603 [2024-11-20 15:17:38.000962] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.603 [2024-11-20 15:17:38.000975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:51.603 15:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.603 15:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67879 00:09:51.603 15:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 67879 ']' 00:09:51.603 15:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 67879 00:09:51.603 15:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:51.603 15:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.603 15:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67879 00:09:51.603 15:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:51.603 15:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:51.603 killing process with pid 67879 00:09:51.603 15:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67879' 00:09:51.603 15:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 67879 00:09:51.603 [2024-11-20 15:17:38.056873] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:51.603 15:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 67879 00:09:52.170 [2024-11-20 15:17:38.362079] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:53.107 15:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:53.107 00:09:53.107 real 0m10.262s 00:09:53.107 user 0m16.237s 00:09:53.107 sys 0m2.058s 00:09:53.107 15:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.107 15:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.107 ************************************ 00:09:53.107 END TEST raid_state_function_test_sb 00:09:53.107 ************************************ 00:09:53.107 15:17:39 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:09:53.107 15:17:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:53.107 15:17:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.107 15:17:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:53.107 ************************************ 00:09:53.107 START TEST raid_superblock_test 00:09:53.107 ************************************ 00:09:53.107 15:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:09:53.107 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:53.107 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:53.107 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:53.107 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:53.107 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:53.107 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:53.107 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:53.107 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:53.107 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:53.107 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:53.107 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:53.107 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:53.107 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:53.107 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:09:53.107 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:53.107 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68494 00:09:53.107 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68494 00:09:53.107 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:53.107 15:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68494 ']' 00:09:53.107 15:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.107 15:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.107 15:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.107 15:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.107 15:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.365 [2024-11-20 15:17:39.669971] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:09:53.365 [2024-11-20 15:17:39.670233] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68494 ] 00:09:53.623 [2024-11-20 15:17:39.851748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.623 [2024-11-20 15:17:39.965690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.882 [2024-11-20 15:17:40.169209] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.882 [2024-11-20 15:17:40.169273] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.141 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.141 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:54.141 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:54.141 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:54.141 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:54.141 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:54.141 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:54.141 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:54.141 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:54.141 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:54.141 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:54.141 
15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.141 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.141 malloc1 00:09:54.142 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.142 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:54.142 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.142 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.142 [2024-11-20 15:17:40.557829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:54.142 [2024-11-20 15:17:40.558033] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.142 [2024-11-20 15:17:40.558096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:54.142 [2024-11-20 15:17:40.558195] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.142 [2024-11-20 15:17:40.560711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.142 [2024-11-20 15:17:40.560858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:54.142 pt1 00:09:54.142 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.142 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:54.142 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:54.142 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:54.142 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:54.142 15:17:40 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:54.142 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:54.142 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:54.142 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:54.142 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:54.142 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.142 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.142 malloc2 00:09:54.142 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.142 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:54.142 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.142 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.142 [2024-11-20 15:17:40.616617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:54.142 [2024-11-20 15:17:40.616690] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.142 [2024-11-20 15:17:40.616720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:54.142 [2024-11-20 15:17:40.616732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.142 [2024-11-20 15:17:40.619077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.142 [2024-11-20 15:17:40.619223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:54.142 
pt2 00:09:54.401 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.401 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:54.401 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:54.401 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:54.401 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:54.401 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:54.401 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:54.401 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:54.401 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:54.401 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:54.401 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.401 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.401 malloc3 00:09:54.401 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.401 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:54.401 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.401 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.401 [2024-11-20 15:17:40.687429] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:54.401 [2024-11-20 15:17:40.687595] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.401 [2024-11-20 15:17:40.687627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:54.401 [2024-11-20 15:17:40.687639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.401 [2024-11-20 15:17:40.690014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.401 [2024-11-20 15:17:40.690054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:54.401 pt3 00:09:54.401 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.401 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:54.401 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:54.401 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:54.401 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.401 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.401 [2024-11-20 15:17:40.699463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:54.401 [2024-11-20 15:17:40.701739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:54.401 [2024-11-20 15:17:40.701924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:54.401 [2024-11-20 15:17:40.702095] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:54.401 [2024-11-20 15:17:40.702117] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:54.401 [2024-11-20 15:17:40.702373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:54.401 
[2024-11-20 15:17:40.702546] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:54.401 [2024-11-20 15:17:40.702560] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:54.401 [2024-11-20 15:17:40.702737] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.401 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.402 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:54.402 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:54.402 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.402 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.402 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.402 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.402 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.402 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.402 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.402 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.402 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.402 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.402 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.402 15:17:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:54.402 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.402 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.402 "name": "raid_bdev1", 00:09:54.402 "uuid": "3011bf8c-cb91-432f-8e7d-f91d573fe06b", 00:09:54.402 "strip_size_kb": 0, 00:09:54.402 "state": "online", 00:09:54.402 "raid_level": "raid1", 00:09:54.402 "superblock": true, 00:09:54.402 "num_base_bdevs": 3, 00:09:54.402 "num_base_bdevs_discovered": 3, 00:09:54.402 "num_base_bdevs_operational": 3, 00:09:54.402 "base_bdevs_list": [ 00:09:54.402 { 00:09:54.402 "name": "pt1", 00:09:54.402 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:54.402 "is_configured": true, 00:09:54.402 "data_offset": 2048, 00:09:54.402 "data_size": 63488 00:09:54.402 }, 00:09:54.402 { 00:09:54.402 "name": "pt2", 00:09:54.402 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:54.402 "is_configured": true, 00:09:54.402 "data_offset": 2048, 00:09:54.402 "data_size": 63488 00:09:54.402 }, 00:09:54.402 { 00:09:54.402 "name": "pt3", 00:09:54.402 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:54.402 "is_configured": true, 00:09:54.402 "data_offset": 2048, 00:09:54.402 "data_size": 63488 00:09:54.402 } 00:09:54.402 ] 00:09:54.402 }' 00:09:54.402 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.402 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.661 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:54.661 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:54.661 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:54.661 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:54.661 15:17:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:54.661 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:54.661 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:54.661 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.661 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.661 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:54.661 [2024-11-20 15:17:41.079368] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:54.661 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.661 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:54.661 "name": "raid_bdev1", 00:09:54.661 "aliases": [ 00:09:54.661 "3011bf8c-cb91-432f-8e7d-f91d573fe06b" 00:09:54.661 ], 00:09:54.661 "product_name": "Raid Volume", 00:09:54.661 "block_size": 512, 00:09:54.661 "num_blocks": 63488, 00:09:54.661 "uuid": "3011bf8c-cb91-432f-8e7d-f91d573fe06b", 00:09:54.661 "assigned_rate_limits": { 00:09:54.661 "rw_ios_per_sec": 0, 00:09:54.661 "rw_mbytes_per_sec": 0, 00:09:54.661 "r_mbytes_per_sec": 0, 00:09:54.661 "w_mbytes_per_sec": 0 00:09:54.661 }, 00:09:54.661 "claimed": false, 00:09:54.661 "zoned": false, 00:09:54.661 "supported_io_types": { 00:09:54.661 "read": true, 00:09:54.661 "write": true, 00:09:54.661 "unmap": false, 00:09:54.661 "flush": false, 00:09:54.661 "reset": true, 00:09:54.661 "nvme_admin": false, 00:09:54.661 "nvme_io": false, 00:09:54.661 "nvme_io_md": false, 00:09:54.661 "write_zeroes": true, 00:09:54.661 "zcopy": false, 00:09:54.661 "get_zone_info": false, 00:09:54.661 "zone_management": false, 00:09:54.661 "zone_append": false, 00:09:54.662 "compare": false, 00:09:54.662 
"compare_and_write": false, 00:09:54.662 "abort": false, 00:09:54.662 "seek_hole": false, 00:09:54.662 "seek_data": false, 00:09:54.662 "copy": false, 00:09:54.662 "nvme_iov_md": false 00:09:54.662 }, 00:09:54.662 "memory_domains": [ 00:09:54.662 { 00:09:54.662 "dma_device_id": "system", 00:09:54.662 "dma_device_type": 1 00:09:54.662 }, 00:09:54.662 { 00:09:54.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.662 "dma_device_type": 2 00:09:54.662 }, 00:09:54.662 { 00:09:54.662 "dma_device_id": "system", 00:09:54.662 "dma_device_type": 1 00:09:54.662 }, 00:09:54.662 { 00:09:54.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.662 "dma_device_type": 2 00:09:54.662 }, 00:09:54.662 { 00:09:54.662 "dma_device_id": "system", 00:09:54.662 "dma_device_type": 1 00:09:54.662 }, 00:09:54.662 { 00:09:54.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.662 "dma_device_type": 2 00:09:54.662 } 00:09:54.662 ], 00:09:54.662 "driver_specific": { 00:09:54.662 "raid": { 00:09:54.662 "uuid": "3011bf8c-cb91-432f-8e7d-f91d573fe06b", 00:09:54.662 "strip_size_kb": 0, 00:09:54.662 "state": "online", 00:09:54.662 "raid_level": "raid1", 00:09:54.662 "superblock": true, 00:09:54.662 "num_base_bdevs": 3, 00:09:54.662 "num_base_bdevs_discovered": 3, 00:09:54.662 "num_base_bdevs_operational": 3, 00:09:54.662 "base_bdevs_list": [ 00:09:54.662 { 00:09:54.662 "name": "pt1", 00:09:54.662 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:54.662 "is_configured": true, 00:09:54.662 "data_offset": 2048, 00:09:54.662 "data_size": 63488 00:09:54.662 }, 00:09:54.662 { 00:09:54.662 "name": "pt2", 00:09:54.662 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:54.662 "is_configured": true, 00:09:54.662 "data_offset": 2048, 00:09:54.662 "data_size": 63488 00:09:54.662 }, 00:09:54.662 { 00:09:54.662 "name": "pt3", 00:09:54.662 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:54.662 "is_configured": true, 00:09:54.662 "data_offset": 2048, 00:09:54.662 "data_size": 63488 00:09:54.662 } 
00:09:54.662 ] 00:09:54.662 } 00:09:54.662 } 00:09:54.662 }' 00:09:54.662 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:54.920 pt2 00:09:54.920 pt3' 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.920 [2024-11-20 15:17:41.355320] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:54.920 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3011bf8c-cb91-432f-8e7d-f91d573fe06b 00:09:54.921 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3011bf8c-cb91-432f-8e7d-f91d573fe06b ']' 00:09:54.921 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:54.921 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.921 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.921 [2024-11-20 15:17:41.395055] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:54.921 [2024-11-20 15:17:41.395185] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:54.921 [2024-11-20 15:17:41.395275] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:54.921 [2024-11-20 15:17:41.395349] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:54.921 [2024-11-20 15:17:41.395360] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:54.921 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:55.180 15:17:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.180 [2024-11-20 15:17:41.531103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:55.180 [2024-11-20 15:17:41.533220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:55.180 [2024-11-20 15:17:41.533280] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:55.180 [2024-11-20 15:17:41.533330] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:55.180 [2024-11-20 15:17:41.533386] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:55.180 [2024-11-20 15:17:41.533408] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:55.180 [2024-11-20 15:17:41.533429] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:55.180 [2024-11-20 15:17:41.533439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:55.180 request: 00:09:55.180 { 00:09:55.180 "name": "raid_bdev1", 00:09:55.180 "raid_level": "raid1", 00:09:55.180 "base_bdevs": [ 00:09:55.180 "malloc1", 00:09:55.180 "malloc2", 00:09:55.180 "malloc3" 00:09:55.180 ], 00:09:55.180 "superblock": false, 00:09:55.180 "method": "bdev_raid_create", 00:09:55.180 "req_id": 1 00:09:55.180 } 00:09:55.180 Got JSON-RPC error response 00:09:55.180 response: 00:09:55.180 { 00:09:55.180 "code": -17, 00:09:55.180 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:55.180 } 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:55.180 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:55.181 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:55.181 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:55.181 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.181 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.181 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:55.181 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.181 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:55.181 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:55.181 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:55.181 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.181 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.181 [2024-11-20 15:17:41.594986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:55.181 [2024-11-20 15:17:41.595157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.181 [2024-11-20 15:17:41.595189] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:55.181 [2024-11-20 15:17:41.595201] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.181 [2024-11-20 15:17:41.597749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.181 [2024-11-20 15:17:41.597785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:55.181 [2024-11-20 15:17:41.597864] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:55.181 [2024-11-20 15:17:41.597919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:55.181 pt1 00:09:55.181 
15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.181 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:55.181 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:55.181 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.181 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.181 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.181 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.181 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.181 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.181 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.181 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.181 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.181 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.181 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.181 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.181 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.181 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.181 "name": "raid_bdev1", 00:09:55.181 "uuid": "3011bf8c-cb91-432f-8e7d-f91d573fe06b", 00:09:55.181 "strip_size_kb": 0, 00:09:55.181 
"state": "configuring", 00:09:55.181 "raid_level": "raid1", 00:09:55.181 "superblock": true, 00:09:55.181 "num_base_bdevs": 3, 00:09:55.181 "num_base_bdevs_discovered": 1, 00:09:55.181 "num_base_bdevs_operational": 3, 00:09:55.181 "base_bdevs_list": [ 00:09:55.181 { 00:09:55.181 "name": "pt1", 00:09:55.181 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:55.181 "is_configured": true, 00:09:55.181 "data_offset": 2048, 00:09:55.181 "data_size": 63488 00:09:55.181 }, 00:09:55.181 { 00:09:55.181 "name": null, 00:09:55.181 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:55.181 "is_configured": false, 00:09:55.181 "data_offset": 2048, 00:09:55.181 "data_size": 63488 00:09:55.181 }, 00:09:55.181 { 00:09:55.181 "name": null, 00:09:55.181 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:55.181 "is_configured": false, 00:09:55.181 "data_offset": 2048, 00:09:55.181 "data_size": 63488 00:09:55.181 } 00:09:55.181 ] 00:09:55.181 }' 00:09:55.181 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.181 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.748 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:55.748 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:55.748 15:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.748 15:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.748 [2024-11-20 15:17:42.022413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:55.748 [2024-11-20 15:17:42.022609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.748 [2024-11-20 15:17:42.022681] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:55.748 
[2024-11-20 15:17:42.022800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.748 [2024-11-20 15:17:42.023286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.748 [2024-11-20 15:17:42.023412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:55.748 [2024-11-20 15:17:42.023609] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:55.748 [2024-11-20 15:17:42.023745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:55.748 pt2 00:09:55.748 15:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.748 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:55.748 15:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.748 15:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.748 [2024-11-20 15:17:42.030393] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:55.748 15:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.748 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:55.748 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:55.748 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.748 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.748 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.748 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.748 15:17:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.748 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.748 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.748 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.748 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.748 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.748 15:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.748 15:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.748 15:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.748 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.748 "name": "raid_bdev1", 00:09:55.748 "uuid": "3011bf8c-cb91-432f-8e7d-f91d573fe06b", 00:09:55.748 "strip_size_kb": 0, 00:09:55.748 "state": "configuring", 00:09:55.748 "raid_level": "raid1", 00:09:55.748 "superblock": true, 00:09:55.748 "num_base_bdevs": 3, 00:09:55.748 "num_base_bdevs_discovered": 1, 00:09:55.748 "num_base_bdevs_operational": 3, 00:09:55.748 "base_bdevs_list": [ 00:09:55.748 { 00:09:55.748 "name": "pt1", 00:09:55.748 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:55.748 "is_configured": true, 00:09:55.748 "data_offset": 2048, 00:09:55.748 "data_size": 63488 00:09:55.748 }, 00:09:55.748 { 00:09:55.748 "name": null, 00:09:55.748 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:55.748 "is_configured": false, 00:09:55.748 "data_offset": 0, 00:09:55.748 "data_size": 63488 00:09:55.748 }, 00:09:55.748 { 00:09:55.748 "name": null, 00:09:55.748 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:55.748 "is_configured": false, 00:09:55.748 
"data_offset": 2048, 00:09:55.748 "data_size": 63488 00:09:55.748 } 00:09:55.748 ] 00:09:55.748 }' 00:09:55.748 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.748 15:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.007 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:56.007 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:56.007 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:56.007 15:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.007 15:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.007 [2024-11-20 15:17:42.449792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:56.007 [2024-11-20 15:17:42.449870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.007 [2024-11-20 15:17:42.449891] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:56.007 [2024-11-20 15:17:42.449906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.008 [2024-11-20 15:17:42.450368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.008 [2024-11-20 15:17:42.450390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:56.008 [2024-11-20 15:17:42.450470] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:56.008 [2024-11-20 15:17:42.450504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:56.008 pt2 00:09:56.008 15:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.008 15:17:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:56.008 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:56.008 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:56.008 15:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.008 15:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.008 [2024-11-20 15:17:42.457773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:56.008 [2024-11-20 15:17:42.457826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.008 [2024-11-20 15:17:42.457843] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:56.008 [2024-11-20 15:17:42.457856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.008 [2024-11-20 15:17:42.458226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.008 [2024-11-20 15:17:42.458250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:56.008 [2024-11-20 15:17:42.458311] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:56.008 [2024-11-20 15:17:42.458333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:56.008 [2024-11-20 15:17:42.458445] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:56.008 [2024-11-20 15:17:42.458459] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:56.008 [2024-11-20 15:17:42.458712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:56.008 [2024-11-20 15:17:42.458859] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:09:56.008 [2024-11-20 15:17:42.458869] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:56.008 [2024-11-20 15:17:42.459017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.008 pt3 00:09:56.008 15:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.008 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:56.008 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:56.008 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:56.008 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:56.008 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:56.008 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.008 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.008 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.008 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.008 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.008 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.008 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.008 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.008 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:56.008 15:17:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.008 15:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.267 15:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.267 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.267 "name": "raid_bdev1", 00:09:56.267 "uuid": "3011bf8c-cb91-432f-8e7d-f91d573fe06b", 00:09:56.267 "strip_size_kb": 0, 00:09:56.267 "state": "online", 00:09:56.267 "raid_level": "raid1", 00:09:56.267 "superblock": true, 00:09:56.267 "num_base_bdevs": 3, 00:09:56.267 "num_base_bdevs_discovered": 3, 00:09:56.267 "num_base_bdevs_operational": 3, 00:09:56.267 "base_bdevs_list": [ 00:09:56.267 { 00:09:56.267 "name": "pt1", 00:09:56.267 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:56.267 "is_configured": true, 00:09:56.267 "data_offset": 2048, 00:09:56.267 "data_size": 63488 00:09:56.267 }, 00:09:56.267 { 00:09:56.267 "name": "pt2", 00:09:56.267 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:56.267 "is_configured": true, 00:09:56.267 "data_offset": 2048, 00:09:56.267 "data_size": 63488 00:09:56.267 }, 00:09:56.267 { 00:09:56.267 "name": "pt3", 00:09:56.267 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:56.267 "is_configured": true, 00:09:56.267 "data_offset": 2048, 00:09:56.267 "data_size": 63488 00:09:56.267 } 00:09:56.267 ] 00:09:56.267 }' 00:09:56.267 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.267 15:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.527 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:56.527 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:56.527 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:09:56.527 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:56.527 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:56.527 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:56.527 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:56.527 15:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.527 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:56.527 15:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.527 [2024-11-20 15:17:42.846082] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:56.527 15:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.527 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:56.527 "name": "raid_bdev1", 00:09:56.527 "aliases": [ 00:09:56.527 "3011bf8c-cb91-432f-8e7d-f91d573fe06b" 00:09:56.527 ], 00:09:56.527 "product_name": "Raid Volume", 00:09:56.527 "block_size": 512, 00:09:56.527 "num_blocks": 63488, 00:09:56.527 "uuid": "3011bf8c-cb91-432f-8e7d-f91d573fe06b", 00:09:56.527 "assigned_rate_limits": { 00:09:56.527 "rw_ios_per_sec": 0, 00:09:56.527 "rw_mbytes_per_sec": 0, 00:09:56.527 "r_mbytes_per_sec": 0, 00:09:56.527 "w_mbytes_per_sec": 0 00:09:56.527 }, 00:09:56.527 "claimed": false, 00:09:56.527 "zoned": false, 00:09:56.527 "supported_io_types": { 00:09:56.527 "read": true, 00:09:56.527 "write": true, 00:09:56.527 "unmap": false, 00:09:56.527 "flush": false, 00:09:56.527 "reset": true, 00:09:56.527 "nvme_admin": false, 00:09:56.527 "nvme_io": false, 00:09:56.527 "nvme_io_md": false, 00:09:56.527 "write_zeroes": true, 00:09:56.527 "zcopy": false, 00:09:56.527 "get_zone_info": 
false, 00:09:56.527 "zone_management": false, 00:09:56.527 "zone_append": false, 00:09:56.527 "compare": false, 00:09:56.527 "compare_and_write": false, 00:09:56.527 "abort": false, 00:09:56.527 "seek_hole": false, 00:09:56.527 "seek_data": false, 00:09:56.527 "copy": false, 00:09:56.527 "nvme_iov_md": false 00:09:56.527 }, 00:09:56.527 "memory_domains": [ 00:09:56.527 { 00:09:56.527 "dma_device_id": "system", 00:09:56.527 "dma_device_type": 1 00:09:56.527 }, 00:09:56.527 { 00:09:56.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.527 "dma_device_type": 2 00:09:56.527 }, 00:09:56.527 { 00:09:56.527 "dma_device_id": "system", 00:09:56.527 "dma_device_type": 1 00:09:56.527 }, 00:09:56.527 { 00:09:56.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.527 "dma_device_type": 2 00:09:56.527 }, 00:09:56.527 { 00:09:56.527 "dma_device_id": "system", 00:09:56.527 "dma_device_type": 1 00:09:56.527 }, 00:09:56.527 { 00:09:56.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.527 "dma_device_type": 2 00:09:56.527 } 00:09:56.527 ], 00:09:56.527 "driver_specific": { 00:09:56.527 "raid": { 00:09:56.527 "uuid": "3011bf8c-cb91-432f-8e7d-f91d573fe06b", 00:09:56.527 "strip_size_kb": 0, 00:09:56.527 "state": "online", 00:09:56.527 "raid_level": "raid1", 00:09:56.527 "superblock": true, 00:09:56.527 "num_base_bdevs": 3, 00:09:56.527 "num_base_bdevs_discovered": 3, 00:09:56.527 "num_base_bdevs_operational": 3, 00:09:56.527 "base_bdevs_list": [ 00:09:56.527 { 00:09:56.527 "name": "pt1", 00:09:56.527 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:56.527 "is_configured": true, 00:09:56.527 "data_offset": 2048, 00:09:56.527 "data_size": 63488 00:09:56.527 }, 00:09:56.527 { 00:09:56.527 "name": "pt2", 00:09:56.527 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:56.527 "is_configured": true, 00:09:56.527 "data_offset": 2048, 00:09:56.527 "data_size": 63488 00:09:56.527 }, 00:09:56.527 { 00:09:56.527 "name": "pt3", 00:09:56.527 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:56.527 "is_configured": true, 00:09:56.527 "data_offset": 2048, 00:09:56.527 "data_size": 63488 00:09:56.527 } 00:09:56.527 ] 00:09:56.527 } 00:09:56.527 } 00:09:56.527 }' 00:09:56.527 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:56.527 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:56.527 pt2 00:09:56.527 pt3' 00:09:56.527 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.527 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:56.527 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.527 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:56.527 15:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.527 15:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.527 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.527 15:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.527 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.527 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.527 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.527 15:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.527 15:17:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:56.527 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.527 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.786 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.786 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.786 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.786 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.786 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:56.786 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.786 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.786 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.786 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.786 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.786 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.787 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:56.787 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:56.787 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.787 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.787 [2024-11-20 15:17:43.106030] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:56.787 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.787 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3011bf8c-cb91-432f-8e7d-f91d573fe06b '!=' 3011bf8c-cb91-432f-8e7d-f91d573fe06b ']' 00:09:56.787 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:56.787 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:56.787 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:56.787 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:56.787 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.787 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.787 [2024-11-20 15:17:43.149796] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:56.787 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.787 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:56.787 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:56.787 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:56.787 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.787 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.787 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:56.787 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.787 15:17:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.787 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.787 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.787 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:56.787 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.787 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.787 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.787 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.787 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.787 "name": "raid_bdev1", 00:09:56.787 "uuid": "3011bf8c-cb91-432f-8e7d-f91d573fe06b", 00:09:56.787 "strip_size_kb": 0, 00:09:56.787 "state": "online", 00:09:56.787 "raid_level": "raid1", 00:09:56.787 "superblock": true, 00:09:56.787 "num_base_bdevs": 3, 00:09:56.787 "num_base_bdevs_discovered": 2, 00:09:56.787 "num_base_bdevs_operational": 2, 00:09:56.787 "base_bdevs_list": [ 00:09:56.787 { 00:09:56.787 "name": null, 00:09:56.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.787 "is_configured": false, 00:09:56.787 "data_offset": 0, 00:09:56.787 "data_size": 63488 00:09:56.787 }, 00:09:56.787 { 00:09:56.787 "name": "pt2", 00:09:56.787 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:56.787 "is_configured": true, 00:09:56.787 "data_offset": 2048, 00:09:56.787 "data_size": 63488 00:09:56.787 }, 00:09:56.787 { 00:09:56.787 "name": "pt3", 00:09:56.787 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:56.787 "is_configured": true, 00:09:56.787 "data_offset": 2048, 00:09:56.787 "data_size": 63488 00:09:56.787 } 
00:09:56.787 ] 00:09:56.787 }' 00:09:56.787 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.787 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.353 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:57.353 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.353 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.353 [2024-11-20 15:17:43.553164] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:57.353 [2024-11-20 15:17:43.553321] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:57.353 [2024-11-20 15:17:43.553497] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:57.353 [2024-11-20 15:17:43.553696] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:57.353 [2024-11-20 15:17:43.553817] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:57.353 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.353 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.353 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.353 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.353 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.354 15:17:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.354 [2024-11-20 15:17:43.625023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:57.354 [2024-11-20 15:17:43.625184] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.354 [2024-11-20 15:17:43.625210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:57.354 [2024-11-20 15:17:43.625224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.354 [2024-11-20 15:17:43.627650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.354 [2024-11-20 15:17:43.627704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:57.354 [2024-11-20 15:17:43.627785] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:57.354 [2024-11-20 15:17:43.627841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:57.354 pt2 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.354 15:17:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.354 "name": "raid_bdev1", 00:09:57.354 "uuid": "3011bf8c-cb91-432f-8e7d-f91d573fe06b", 00:09:57.354 "strip_size_kb": 0, 00:09:57.354 "state": "configuring", 00:09:57.354 "raid_level": "raid1", 00:09:57.354 "superblock": true, 00:09:57.354 "num_base_bdevs": 3, 00:09:57.354 "num_base_bdevs_discovered": 1, 00:09:57.354 "num_base_bdevs_operational": 2, 00:09:57.354 "base_bdevs_list": [ 00:09:57.354 { 00:09:57.354 "name": null, 00:09:57.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.354 "is_configured": false, 00:09:57.354 "data_offset": 2048, 00:09:57.354 "data_size": 63488 00:09:57.354 }, 00:09:57.354 { 00:09:57.354 "name": "pt2", 00:09:57.354 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:57.354 "is_configured": true, 00:09:57.354 "data_offset": 2048, 00:09:57.354 "data_size": 63488 00:09:57.354 }, 00:09:57.354 { 00:09:57.354 "name": null, 00:09:57.354 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:57.354 "is_configured": false, 00:09:57.354 "data_offset": 2048, 00:09:57.354 "data_size": 63488 00:09:57.354 } 
00:09:57.354 ] 00:09:57.354 }' 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.354 15:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.613 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:57.613 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:57.613 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:57.613 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:57.613 15:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.613 15:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.613 [2024-11-20 15:17:44.068420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:57.613 [2024-11-20 15:17:44.068490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.613 [2024-11-20 15:17:44.068512] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:57.613 [2024-11-20 15:17:44.068527] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.613 [2024-11-20 15:17:44.068988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.613 [2024-11-20 15:17:44.069011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:57.613 [2024-11-20 15:17:44.069101] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:57.613 [2024-11-20 15:17:44.069131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:57.613 [2024-11-20 15:17:44.069248] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:09:57.613 [2024-11-20 15:17:44.069261] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:57.613 [2024-11-20 15:17:44.069521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:57.613 [2024-11-20 15:17:44.069692] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:57.613 [2024-11-20 15:17:44.069704] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:57.613 [2024-11-20 15:17:44.069843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.613 pt3 00:09:57.613 15:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.613 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:57.613 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:57.613 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.613 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.613 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.613 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:57.613 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.613 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.613 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.613 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.613 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.613 
15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:57.613 15:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.613 15:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.872 15:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.872 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.872 "name": "raid_bdev1", 00:09:57.872 "uuid": "3011bf8c-cb91-432f-8e7d-f91d573fe06b", 00:09:57.872 "strip_size_kb": 0, 00:09:57.872 "state": "online", 00:09:57.872 "raid_level": "raid1", 00:09:57.872 "superblock": true, 00:09:57.872 "num_base_bdevs": 3, 00:09:57.872 "num_base_bdevs_discovered": 2, 00:09:57.872 "num_base_bdevs_operational": 2, 00:09:57.872 "base_bdevs_list": [ 00:09:57.872 { 00:09:57.872 "name": null, 00:09:57.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.872 "is_configured": false, 00:09:57.872 "data_offset": 2048, 00:09:57.872 "data_size": 63488 00:09:57.872 }, 00:09:57.872 { 00:09:57.872 "name": "pt2", 00:09:57.872 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:57.872 "is_configured": true, 00:09:57.872 "data_offset": 2048, 00:09:57.872 "data_size": 63488 00:09:57.872 }, 00:09:57.872 { 00:09:57.872 "name": "pt3", 00:09:57.872 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:57.872 "is_configured": true, 00:09:57.872 "data_offset": 2048, 00:09:57.872 "data_size": 63488 00:09:57.872 } 00:09:57.872 ] 00:09:57.872 }' 00:09:57.872 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.872 15:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.130 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:58.130 15:17:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.130 15:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.130 [2024-11-20 15:17:44.483790] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:58.130 [2024-11-20 15:17:44.483964] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:58.130 [2024-11-20 15:17:44.484131] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:58.130 [2024-11-20 15:17:44.484229] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:58.130 [2024-11-20 15:17:44.484443] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:58.130 15:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.130 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:58.130 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.130 15:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.130 15:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.130 15:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.130 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:58.130 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:58.130 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:58.130 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:58.130 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:58.130 15:17:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.130 15:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.130 15:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.130 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:58.130 15:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.130 15:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.130 [2024-11-20 15:17:44.547801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:58.130 [2024-11-20 15:17:44.547859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.130 [2024-11-20 15:17:44.547881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:58.130 [2024-11-20 15:17:44.547892] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.130 [2024-11-20 15:17:44.550345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.130 [2024-11-20 15:17:44.550385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:58.131 [2024-11-20 15:17:44.550466] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:58.131 [2024-11-20 15:17:44.550511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:58.131 [2024-11-20 15:17:44.550629] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:58.131 [2024-11-20 15:17:44.550641] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:58.131 [2024-11-20 15:17:44.550679] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:09:58.131 [2024-11-20 15:17:44.550735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:58.131 pt1 00:09:58.131 15:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.131 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:58.131 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:58.131 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:58.131 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.131 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.131 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.131 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:58.131 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.131 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.131 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.131 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.131 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.131 15:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.131 15:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.131 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:58.131 15:17:44 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.131 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.131 "name": "raid_bdev1", 00:09:58.131 "uuid": "3011bf8c-cb91-432f-8e7d-f91d573fe06b", 00:09:58.131 "strip_size_kb": 0, 00:09:58.131 "state": "configuring", 00:09:58.131 "raid_level": "raid1", 00:09:58.131 "superblock": true, 00:09:58.131 "num_base_bdevs": 3, 00:09:58.131 "num_base_bdevs_discovered": 1, 00:09:58.131 "num_base_bdevs_operational": 2, 00:09:58.131 "base_bdevs_list": [ 00:09:58.131 { 00:09:58.131 "name": null, 00:09:58.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.131 "is_configured": false, 00:09:58.131 "data_offset": 2048, 00:09:58.131 "data_size": 63488 00:09:58.131 }, 00:09:58.131 { 00:09:58.131 "name": "pt2", 00:09:58.131 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:58.131 "is_configured": true, 00:09:58.131 "data_offset": 2048, 00:09:58.131 "data_size": 63488 00:09:58.131 }, 00:09:58.131 { 00:09:58.131 "name": null, 00:09:58.131 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:58.131 "is_configured": false, 00:09:58.131 "data_offset": 2048, 00:09:58.131 "data_size": 63488 00:09:58.131 } 00:09:58.131 ] 00:09:58.131 }' 00:09:58.131 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.131 15:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.698 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:58.698 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:58.698 15:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.698 15:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.698 15:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:58.698 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:58.698 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:58.698 15:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.698 15:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.698 [2024-11-20 15:17:44.939639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:58.698 [2024-11-20 15:17:44.939723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.698 [2024-11-20 15:17:44.939749] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:58.698 [2024-11-20 15:17:44.939761] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.698 [2024-11-20 15:17:44.940224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.698 [2024-11-20 15:17:44.940243] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:58.698 [2024-11-20 15:17:44.940322] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:58.698 [2024-11-20 15:17:44.940344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:58.698 [2024-11-20 15:17:44.940461] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:58.698 [2024-11-20 15:17:44.940470] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:58.698 [2024-11-20 15:17:44.940738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:58.698 [2024-11-20 15:17:44.940881] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:58.698 [2024-11-20 15:17:44.940897] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:58.698 [2024-11-20 15:17:44.941030] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.698 pt3 00:09:58.698 15:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.698 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:58.698 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:58.698 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:58.698 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.698 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.698 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:58.698 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.698 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.698 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.698 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.698 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:58.698 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.698 15:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.698 15:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.698 15:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:09:58.698 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.698 "name": "raid_bdev1", 00:09:58.698 "uuid": "3011bf8c-cb91-432f-8e7d-f91d573fe06b", 00:09:58.698 "strip_size_kb": 0, 00:09:58.699 "state": "online", 00:09:58.699 "raid_level": "raid1", 00:09:58.699 "superblock": true, 00:09:58.699 "num_base_bdevs": 3, 00:09:58.699 "num_base_bdevs_discovered": 2, 00:09:58.699 "num_base_bdevs_operational": 2, 00:09:58.699 "base_bdevs_list": [ 00:09:58.699 { 00:09:58.699 "name": null, 00:09:58.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.699 "is_configured": false, 00:09:58.699 "data_offset": 2048, 00:09:58.699 "data_size": 63488 00:09:58.699 }, 00:09:58.699 { 00:09:58.699 "name": "pt2", 00:09:58.699 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:58.699 "is_configured": true, 00:09:58.699 "data_offset": 2048, 00:09:58.699 "data_size": 63488 00:09:58.699 }, 00:09:58.699 { 00:09:58.699 "name": "pt3", 00:09:58.699 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:58.699 "is_configured": true, 00:09:58.699 "data_offset": 2048, 00:09:58.699 "data_size": 63488 00:09:58.699 } 00:09:58.699 ] 00:09:58.699 }' 00:09:58.699 15:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.699 15:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.958 15:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:58.958 15:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.958 15:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.958 15:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:58.958 15:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.958 15:17:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:58.958 15:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:58.958 15:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:58.958 15:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.958 15:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.958 [2024-11-20 15:17:45.351305] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:58.958 15:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.958 15:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 3011bf8c-cb91-432f-8e7d-f91d573fe06b '!=' 3011bf8c-cb91-432f-8e7d-f91d573fe06b ']' 00:09:58.958 15:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68494 00:09:58.958 15:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68494 ']' 00:09:58.958 15:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68494 00:09:58.958 15:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:58.959 15:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.959 15:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68494 00:09:58.959 killing process with pid 68494 00:09:58.959 15:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:58.959 15:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:58.959 15:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68494' 00:09:58.959 15:17:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68494 00:09:58.959 [2024-11-20 15:17:45.427817] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:58.959 [2024-11-20 15:17:45.427911] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:58.959 15:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68494 00:09:58.959 [2024-11-20 15:17:45.427971] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:58.959 [2024-11-20 15:17:45.427986] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:59.527 [2024-11-20 15:17:45.736448] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:00.498 15:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:00.498 00:10:00.498 real 0m7.305s 00:10:00.498 user 0m11.333s 00:10:00.498 sys 0m1.456s 00:10:00.498 15:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.498 15:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.498 ************************************ 00:10:00.498 END TEST raid_superblock_test 00:10:00.498 ************************************ 00:10:00.498 15:17:46 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:00.498 15:17:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:00.498 15:17:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.498 15:17:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:00.498 ************************************ 00:10:00.498 START TEST raid_read_error_test 00:10:00.498 ************************************ 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:10:00.498 15:17:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:00.498 15:17:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.40qh2K8oBq 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68940 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68940 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 68940 ']' 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.498 15:17:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.758 [2024-11-20 15:17:47.056794] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:10:00.758 [2024-11-20 15:17:47.057133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68940 ] 00:10:00.758 [2024-11-20 15:17:47.238705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.018 [2024-11-20 15:17:47.357067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.277 [2024-11-20 15:17:47.550137] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:01.277 [2024-11-20 15:17:47.550186] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:01.537 15:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:01.537 15:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:01.537 15:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:01.537 15:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:01.537 15:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.537 15:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.538 BaseBdev1_malloc 00:10:01.538 15:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.538 15:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:01.538 15:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.538 15:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.538 true 00:10:01.538 15:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:01.538 15:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:01.538 15:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.538 15:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.538 [2024-11-20 15:17:48.000773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:01.538 [2024-11-20 15:17:48.000832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.538 [2024-11-20 15:17:48.000854] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:01.538 [2024-11-20 15:17:48.000868] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.538 [2024-11-20 15:17:48.003228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.538 [2024-11-20 15:17:48.003400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:01.538 BaseBdev1 00:10:01.538 15:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.538 15:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:01.538 15:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:01.538 15:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.538 15:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.798 BaseBdev2_malloc 00:10:01.798 15:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.798 15:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:01.798 15:17:48 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.798 15:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.798 true 00:10:01.798 15:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.798 15:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:01.798 15:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.798 15:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.798 [2024-11-20 15:17:48.065593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:01.798 [2024-11-20 15:17:48.065670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.798 [2024-11-20 15:17:48.065689] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:01.798 [2024-11-20 15:17:48.065703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.798 [2024-11-20 15:17:48.068034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.798 [2024-11-20 15:17:48.068077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:01.798 BaseBdev2 00:10:01.798 15:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.798 15:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:01.798 15:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:01.798 15:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.798 15:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.798 BaseBdev3_malloc 00:10:01.798 15:17:48 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.798 15:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:01.798 15:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.798 15:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.798 true 00:10:01.798 15:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.798 15:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:01.798 15:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.798 15:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.798 [2024-11-20 15:17:48.147987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:01.798 [2024-11-20 15:17:48.148175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.798 [2024-11-20 15:17:48.148205] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:01.798 [2024-11-20 15:17:48.148221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.798 [2024-11-20 15:17:48.150747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.798 [2024-11-20 15:17:48.150789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:01.798 BaseBdev3 00:10:01.798 15:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.798 15:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:01.798 15:17:48 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.798 15:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.798 [2024-11-20 15:17:48.160045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:01.798 [2024-11-20 15:17:48.162074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:01.798 [2024-11-20 15:17:48.162266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:01.798 [2024-11-20 15:17:48.162470] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:01.798 [2024-11-20 15:17:48.162484] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:01.798 [2024-11-20 15:17:48.162755] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:01.799 [2024-11-20 15:17:48.162935] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:01.799 [2024-11-20 15:17:48.162949] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:01.799 [2024-11-20 15:17:48.163104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:01.799 15:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.799 15:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:01.799 15:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:01.799 15:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:01.799 15:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.799 15:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.799 15:17:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.799 15:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.799 15:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.799 15:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.799 15:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.799 15:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.799 15:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.799 15:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.799 15:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:01.799 15:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.799 15:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.799 "name": "raid_bdev1", 00:10:01.799 "uuid": "d0363ee4-fec6-4b4e-88f0-3b5231bf06f5", 00:10:01.799 "strip_size_kb": 0, 00:10:01.799 "state": "online", 00:10:01.799 "raid_level": "raid1", 00:10:01.799 "superblock": true, 00:10:01.799 "num_base_bdevs": 3, 00:10:01.799 "num_base_bdevs_discovered": 3, 00:10:01.799 "num_base_bdevs_operational": 3, 00:10:01.799 "base_bdevs_list": [ 00:10:01.799 { 00:10:01.799 "name": "BaseBdev1", 00:10:01.799 "uuid": "7b763ad4-d6e0-5d21-b030-1560ed9e0ebe", 00:10:01.799 "is_configured": true, 00:10:01.799 "data_offset": 2048, 00:10:01.799 "data_size": 63488 00:10:01.799 }, 00:10:01.799 { 00:10:01.799 "name": "BaseBdev2", 00:10:01.799 "uuid": "19de8ce7-ca35-5730-842b-ef423c2ae39c", 00:10:01.799 "is_configured": true, 00:10:01.799 "data_offset": 2048, 00:10:01.799 "data_size": 63488 
00:10:01.799 }, 00:10:01.799 { 00:10:01.799 "name": "BaseBdev3", 00:10:01.799 "uuid": "6ca896f0-76d5-5ced-a966-7fa5e427d361", 00:10:01.799 "is_configured": true, 00:10:01.799 "data_offset": 2048, 00:10:01.799 "data_size": 63488 00:10:01.799 } 00:10:01.799 ] 00:10:01.799 }' 00:10:01.799 15:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.799 15:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.367 15:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:02.367 15:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:02.367 [2024-11-20 15:17:48.660666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:03.313 15:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:03.313 15:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.313 15:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.313 15:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.313 15:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:03.313 15:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:03.313 15:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:03.313 15:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:03.313 15:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:03.313 15:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:03.313 
15:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:03.313 15:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.313 15:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.313 15:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.313 15:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.313 15:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.313 15:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.313 15:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.313 15:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.313 15:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:03.313 15:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.313 15:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.313 15:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.313 15:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.313 "name": "raid_bdev1", 00:10:03.313 "uuid": "d0363ee4-fec6-4b4e-88f0-3b5231bf06f5", 00:10:03.313 "strip_size_kb": 0, 00:10:03.313 "state": "online", 00:10:03.313 "raid_level": "raid1", 00:10:03.313 "superblock": true, 00:10:03.313 "num_base_bdevs": 3, 00:10:03.313 "num_base_bdevs_discovered": 3, 00:10:03.313 "num_base_bdevs_operational": 3, 00:10:03.313 "base_bdevs_list": [ 00:10:03.313 { 00:10:03.313 "name": "BaseBdev1", 00:10:03.313 "uuid": "7b763ad4-d6e0-5d21-b030-1560ed9e0ebe", 
00:10:03.313 "is_configured": true, 00:10:03.313 "data_offset": 2048, 00:10:03.313 "data_size": 63488 00:10:03.313 }, 00:10:03.313 { 00:10:03.313 "name": "BaseBdev2", 00:10:03.313 "uuid": "19de8ce7-ca35-5730-842b-ef423c2ae39c", 00:10:03.313 "is_configured": true, 00:10:03.313 "data_offset": 2048, 00:10:03.313 "data_size": 63488 00:10:03.313 }, 00:10:03.313 { 00:10:03.313 "name": "BaseBdev3", 00:10:03.313 "uuid": "6ca896f0-76d5-5ced-a966-7fa5e427d361", 00:10:03.313 "is_configured": true, 00:10:03.313 "data_offset": 2048, 00:10:03.313 "data_size": 63488 00:10:03.313 } 00:10:03.313 ] 00:10:03.313 }' 00:10:03.313 15:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.313 15:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.572 15:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:03.572 15:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.572 15:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.572 [2024-11-20 15:17:50.031714] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:03.572 [2024-11-20 15:17:50.031743] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:03.572 [2024-11-20 15:17:50.034641] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:03.572 [2024-11-20 15:17:50.034829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:03.572 [2024-11-20 15:17:50.035010] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:03.572 [2024-11-20 15:17:50.035025] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:03.572 { 00:10:03.572 "results": [ 00:10:03.572 { 00:10:03.572 "job": "raid_bdev1", 
00:10:03.572 "core_mask": "0x1", 00:10:03.572 "workload": "randrw", 00:10:03.572 "percentage": 50, 00:10:03.572 "status": "finished", 00:10:03.572 "queue_depth": 1, 00:10:03.572 "io_size": 131072, 00:10:03.572 "runtime": 1.371185, 00:10:03.572 "iops": 14108.964144152686, 00:10:03.572 "mibps": 1763.6205180190857, 00:10:03.572 "io_failed": 0, 00:10:03.572 "io_timeout": 0, 00:10:03.572 "avg_latency_us": 68.21562773371996, 00:10:03.572 "min_latency_us": 23.955020080321287, 00:10:03.572 "max_latency_us": 1381.7831325301204 00:10:03.572 } 00:10:03.572 ], 00:10:03.572 "core_count": 1 00:10:03.572 } 00:10:03.572 15:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.572 15:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68940 00:10:03.572 15:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 68940 ']' 00:10:03.572 15:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 68940 00:10:03.572 15:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:03.572 15:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:03.572 15:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68940 00:10:03.831 killing process with pid 68940 00:10:03.831 15:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:03.831 15:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:03.831 15:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68940' 00:10:03.831 15:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 68940 00:10:03.831 [2024-11-20 15:17:50.071854] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:03.831 15:17:50 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 68940 00:10:03.831 [2024-11-20 15:17:50.304672] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:05.209 15:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.40qh2K8oBq 00:10:05.209 15:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:05.209 15:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:05.209 ************************************ 00:10:05.209 END TEST raid_read_error_test 00:10:05.209 ************************************ 00:10:05.209 15:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:05.209 15:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:05.209 15:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:05.209 15:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:05.209 15:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:05.209 00:10:05.209 real 0m4.562s 00:10:05.209 user 0m5.374s 00:10:05.209 sys 0m0.619s 00:10:05.209 15:17:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.209 15:17:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.209 15:17:51 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:05.209 15:17:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:05.209 15:17:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.209 15:17:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:05.209 ************************************ 00:10:05.209 START TEST raid_write_error_test 00:10:05.209 ************************************ 00:10:05.209 15:17:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pK3y9bHFdW 00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69084 00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69084 00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69084 ']' 00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:05.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:05.209 15:17:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.209 [2024-11-20 15:17:51.685228] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:10:05.209 [2024-11-20 15:17:51.685524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69084 ] 00:10:05.468 [2024-11-20 15:17:51.855802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.727 [2024-11-20 15:17:51.975495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.727 [2024-11-20 15:17:52.170750] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.727 [2024-11-20 15:17:52.170795] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:06.295 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:06.295 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:06.295 15:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:06.295 15:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:06.295 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.295 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.295 BaseBdev1_malloc 00:10:06.295 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.295 15:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:06.295 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.295 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.295 true 00:10:06.295 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.295 15:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:06.295 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.295 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.296 [2024-11-20 15:17:52.582056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:06.296 [2024-11-20 15:17:52.582260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:06.296 [2024-11-20 15:17:52.582323] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:06.296 [2024-11-20 15:17:52.582416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:06.296 [2024-11-20 15:17:52.584916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:06.296 [2024-11-20 15:17:52.585105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:06.296 BaseBdev1 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:06.296 BaseBdev2_malloc 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.296 true 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.296 [2024-11-20 15:17:52.649782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:06.296 [2024-11-20 15:17:52.649950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:06.296 [2024-11-20 15:17:52.649977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:06.296 [2024-11-20 15:17:52.649992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:06.296 [2024-11-20 15:17:52.652336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:06.296 [2024-11-20 15:17:52.652379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:06.296 BaseBdev2 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:06.296 15:17:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.296 BaseBdev3_malloc 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.296 true 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.296 [2024-11-20 15:17:52.726517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:06.296 [2024-11-20 15:17:52.726576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:06.296 [2024-11-20 15:17:52.726597] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:06.296 [2024-11-20 15:17:52.726611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:06.296 [2024-11-20 15:17:52.729006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:06.296 [2024-11-20 15:17:52.729157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:06.296 BaseBdev3 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.296 [2024-11-20 15:17:52.738573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:06.296 [2024-11-20 15:17:52.740630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:06.296 [2024-11-20 15:17:52.740826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:06.296 [2024-11-20 15:17:52.741051] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:06.296 [2024-11-20 15:17:52.741066] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:06.296 [2024-11-20 15:17:52.741329] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:06.296 [2024-11-20 15:17:52.741500] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:06.296 [2024-11-20 15:17:52.741514] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:06.296 [2024-11-20 15:17:52.741690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.296 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.555 15:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.555 "name": "raid_bdev1", 00:10:06.555 "uuid": "35da6361-5802-4ad9-9442-544da93f5622", 00:10:06.555 "strip_size_kb": 0, 00:10:06.555 "state": "online", 00:10:06.555 "raid_level": "raid1", 00:10:06.555 "superblock": true, 00:10:06.555 "num_base_bdevs": 3, 00:10:06.555 "num_base_bdevs_discovered": 3, 00:10:06.555 "num_base_bdevs_operational": 3, 00:10:06.555 "base_bdevs_list": [ 00:10:06.555 { 00:10:06.555 "name": "BaseBdev1", 00:10:06.555 
"uuid": "facdf156-e4f1-538a-aeae-0cf77ed7303c", 00:10:06.555 "is_configured": true, 00:10:06.555 "data_offset": 2048, 00:10:06.555 "data_size": 63488 00:10:06.555 }, 00:10:06.555 { 00:10:06.555 "name": "BaseBdev2", 00:10:06.555 "uuid": "da7dd1ba-0d8d-5ef7-ab0d-f8b3594f3c57", 00:10:06.555 "is_configured": true, 00:10:06.555 "data_offset": 2048, 00:10:06.555 "data_size": 63488 00:10:06.555 }, 00:10:06.555 { 00:10:06.555 "name": "BaseBdev3", 00:10:06.555 "uuid": "b5c1fc1d-2750-5d21-8298-bb074c2266cd", 00:10:06.555 "is_configured": true, 00:10:06.555 "data_offset": 2048, 00:10:06.555 "data_size": 63488 00:10:06.555 } 00:10:06.555 ] 00:10:06.555 }' 00:10:06.555 15:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.555 15:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.814 15:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:06.814 15:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:06.814 [2024-11-20 15:17:53.279037] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:07.750 15:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:07.750 15:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.750 15:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.750 [2024-11-20 15:17:54.198501] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:07.750 [2024-11-20 15:17:54.198731] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:07.750 [2024-11-20 15:17:54.199007] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:10:07.750 15:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.750 15:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:07.750 15:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:07.750 15:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:07.750 15:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:07.750 15:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:07.750 15:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:07.750 15:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:07.750 15:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.750 15:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.750 15:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:07.750 15:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.750 15:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.750 15:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.750 15:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.750 15:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.750 15:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.750 15:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.750 
15:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:08.009 15:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.009 15:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.009 "name": "raid_bdev1", 00:10:08.009 "uuid": "35da6361-5802-4ad9-9442-544da93f5622", 00:10:08.009 "strip_size_kb": 0, 00:10:08.009 "state": "online", 00:10:08.009 "raid_level": "raid1", 00:10:08.009 "superblock": true, 00:10:08.009 "num_base_bdevs": 3, 00:10:08.009 "num_base_bdevs_discovered": 2, 00:10:08.009 "num_base_bdevs_operational": 2, 00:10:08.009 "base_bdevs_list": [ 00:10:08.009 { 00:10:08.009 "name": null, 00:10:08.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.009 "is_configured": false, 00:10:08.009 "data_offset": 0, 00:10:08.009 "data_size": 63488 00:10:08.009 }, 00:10:08.009 { 00:10:08.009 "name": "BaseBdev2", 00:10:08.009 "uuid": "da7dd1ba-0d8d-5ef7-ab0d-f8b3594f3c57", 00:10:08.009 "is_configured": true, 00:10:08.009 "data_offset": 2048, 00:10:08.009 "data_size": 63488 00:10:08.009 }, 00:10:08.009 { 00:10:08.009 "name": "BaseBdev3", 00:10:08.009 "uuid": "b5c1fc1d-2750-5d21-8298-bb074c2266cd", 00:10:08.009 "is_configured": true, 00:10:08.009 "data_offset": 2048, 00:10:08.009 "data_size": 63488 00:10:08.009 } 00:10:08.009 ] 00:10:08.009 }' 00:10:08.009 15:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.009 15:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.268 15:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:08.268 15:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.268 15:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.268 [2024-11-20 15:17:54.633098] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:08.268 [2024-11-20 15:17:54.633136] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:08.268 [2024-11-20 15:17:54.636073] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:08.268 [2024-11-20 15:17:54.636234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.268 [2024-11-20 15:17:54.636408] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:08.268 [2024-11-20 15:17:54.636523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:08.268 { 00:10:08.268 "results": [ 00:10:08.268 { 00:10:08.268 "job": "raid_bdev1", 00:10:08.268 "core_mask": "0x1", 00:10:08.268 "workload": "randrw", 00:10:08.268 "percentage": 50, 00:10:08.268 "status": "finished", 00:10:08.268 "queue_depth": 1, 00:10:08.268 "io_size": 131072, 00:10:08.268 "runtime": 1.354112, 00:10:08.268 "iops": 15023.129549106721, 00:10:08.268 "mibps": 1877.8911936383402, 00:10:08.268 "io_failed": 0, 00:10:08.268 "io_timeout": 0, 00:10:08.268 "avg_latency_us": 63.840519508106645, 00:10:08.268 "min_latency_us": 24.366265060240963, 00:10:08.268 "max_latency_us": 1651.5598393574296 00:10:08.268 } 00:10:08.268 ], 00:10:08.268 "core_count": 1 00:10:08.268 } 00:10:08.268 15:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.268 15:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69084 00:10:08.268 15:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69084 ']' 00:10:08.268 15:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69084 00:10:08.268 15:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:08.268 15:17:54 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:08.268 15:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69084 00:10:08.268 15:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:08.268 killing process with pid 69084 00:10:08.268 15:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:08.268 15:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69084' 00:10:08.268 15:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69084 00:10:08.268 [2024-11-20 15:17:54.678554] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:08.268 15:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69084 00:10:08.527 [2024-11-20 15:17:54.914230] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:09.903 15:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:09.903 15:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pK3y9bHFdW 00:10:09.903 15:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:09.903 ************************************ 00:10:09.903 END TEST raid_write_error_test 00:10:09.903 ************************************ 00:10:09.903 15:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:09.903 15:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:09.903 15:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:09.903 15:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:09.903 15:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:10:09.903 00:10:09.903 real 0m4.536s 00:10:09.903 user 0m5.341s 00:10:09.904 sys 0m0.608s 00:10:09.904 15:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.904 15:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.904 15:17:56 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:09.904 15:17:56 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:09.904 15:17:56 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:09.904 15:17:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:09.904 15:17:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.904 15:17:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:09.904 ************************************ 00:10:09.904 START TEST raid_state_function_test 00:10:09.904 ************************************ 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:09.904 
15:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:09.904 15:17:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69228 00:10:09.904 Process raid pid: 69228 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69228' 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69228 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69228 ']' 00:10:09.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:09.904 15:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.904 [2024-11-20 15:17:56.274995] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:10:09.904 [2024-11-20 15:17:56.275146] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.163 [2024-11-20 15:17:56.461715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.163 [2024-11-20 15:17:56.581055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.422 [2024-11-20 15:17:56.800386] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.422 [2024-11-20 15:17:56.800436] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.680 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.680 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:10.680 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:10.680 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.680 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.680 [2024-11-20 15:17:57.146280] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:10.680 [2024-11-20 15:17:57.146337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:10.680 [2024-11-20 15:17:57.146349] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:10.680 [2024-11-20 15:17:57.146362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:10.680 [2024-11-20 15:17:57.146370] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:10.680 [2024-11-20 15:17:57.146382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:10.680 [2024-11-20 15:17:57.146390] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:10.680 [2024-11-20 15:17:57.146401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:10.680 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.680 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:10.680 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.680 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.680 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.680 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.680 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.680 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.680 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.680 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.680 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.680 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.680 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.681 15:17:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.681 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.939 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.939 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.939 "name": "Existed_Raid", 00:10:10.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.939 "strip_size_kb": 64, 00:10:10.939 "state": "configuring", 00:10:10.939 "raid_level": "raid0", 00:10:10.939 "superblock": false, 00:10:10.939 "num_base_bdevs": 4, 00:10:10.939 "num_base_bdevs_discovered": 0, 00:10:10.939 "num_base_bdevs_operational": 4, 00:10:10.939 "base_bdevs_list": [ 00:10:10.939 { 00:10:10.939 "name": "BaseBdev1", 00:10:10.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.940 "is_configured": false, 00:10:10.940 "data_offset": 0, 00:10:10.940 "data_size": 0 00:10:10.940 }, 00:10:10.940 { 00:10:10.940 "name": "BaseBdev2", 00:10:10.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.940 "is_configured": false, 00:10:10.940 "data_offset": 0, 00:10:10.940 "data_size": 0 00:10:10.940 }, 00:10:10.940 { 00:10:10.940 "name": "BaseBdev3", 00:10:10.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.940 "is_configured": false, 00:10:10.940 "data_offset": 0, 00:10:10.940 "data_size": 0 00:10:10.940 }, 00:10:10.940 { 00:10:10.940 "name": "BaseBdev4", 00:10:10.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.940 "is_configured": false, 00:10:10.940 "data_offset": 0, 00:10:10.940 "data_size": 0 00:10:10.940 } 00:10:10.940 ] 00:10:10.940 }' 00:10:10.940 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.940 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.199 [2024-11-20 15:17:57.557649] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:11.199 [2024-11-20 15:17:57.557705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.199 [2024-11-20 15:17:57.565629] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:11.199 [2024-11-20 15:17:57.565684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:11.199 [2024-11-20 15:17:57.565695] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:11.199 [2024-11-20 15:17:57.565708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:11.199 [2024-11-20 15:17:57.565715] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:11.199 [2024-11-20 15:17:57.565727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:11.199 [2024-11-20 15:17:57.565734] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:11.199 [2024-11-20 15:17:57.565746] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.199 [2024-11-20 15:17:57.608365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:11.199 BaseBdev1 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.199 [ 00:10:11.199 { 00:10:11.199 "name": "BaseBdev1", 00:10:11.199 "aliases": [ 00:10:11.199 "7b8cb7d7-b399-4ea3-b3f0-f71d3dd720b4" 00:10:11.199 ], 00:10:11.199 "product_name": "Malloc disk", 00:10:11.199 "block_size": 512, 00:10:11.199 "num_blocks": 65536, 00:10:11.199 "uuid": "7b8cb7d7-b399-4ea3-b3f0-f71d3dd720b4", 00:10:11.199 "assigned_rate_limits": { 00:10:11.199 "rw_ios_per_sec": 0, 00:10:11.199 "rw_mbytes_per_sec": 0, 00:10:11.199 "r_mbytes_per_sec": 0, 00:10:11.199 "w_mbytes_per_sec": 0 00:10:11.199 }, 00:10:11.199 "claimed": true, 00:10:11.199 "claim_type": "exclusive_write", 00:10:11.199 "zoned": false, 00:10:11.199 "supported_io_types": { 00:10:11.199 "read": true, 00:10:11.199 "write": true, 00:10:11.199 "unmap": true, 00:10:11.199 "flush": true, 00:10:11.199 "reset": true, 00:10:11.199 "nvme_admin": false, 00:10:11.199 "nvme_io": false, 00:10:11.199 "nvme_io_md": false, 00:10:11.199 "write_zeroes": true, 00:10:11.199 "zcopy": true, 00:10:11.199 "get_zone_info": false, 00:10:11.199 "zone_management": false, 00:10:11.199 "zone_append": false, 00:10:11.199 "compare": false, 00:10:11.199 "compare_and_write": false, 00:10:11.199 "abort": true, 00:10:11.199 "seek_hole": false, 00:10:11.199 "seek_data": false, 00:10:11.199 "copy": true, 00:10:11.199 "nvme_iov_md": false 00:10:11.199 }, 00:10:11.199 "memory_domains": [ 00:10:11.199 { 00:10:11.199 "dma_device_id": "system", 00:10:11.199 "dma_device_type": 1 00:10:11.199 }, 00:10:11.199 { 00:10:11.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.199 "dma_device_type": 2 00:10:11.199 } 00:10:11.199 ], 00:10:11.199 "driver_specific": {} 00:10:11.199 } 00:10:11.199 ] 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.199 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.459 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.459 "name": "Existed_Raid", 
00:10:11.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.459 "strip_size_kb": 64, 00:10:11.459 "state": "configuring", 00:10:11.459 "raid_level": "raid0", 00:10:11.459 "superblock": false, 00:10:11.459 "num_base_bdevs": 4, 00:10:11.459 "num_base_bdevs_discovered": 1, 00:10:11.459 "num_base_bdevs_operational": 4, 00:10:11.459 "base_bdevs_list": [ 00:10:11.459 { 00:10:11.459 "name": "BaseBdev1", 00:10:11.459 "uuid": "7b8cb7d7-b399-4ea3-b3f0-f71d3dd720b4", 00:10:11.459 "is_configured": true, 00:10:11.459 "data_offset": 0, 00:10:11.459 "data_size": 65536 00:10:11.459 }, 00:10:11.459 { 00:10:11.459 "name": "BaseBdev2", 00:10:11.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.459 "is_configured": false, 00:10:11.459 "data_offset": 0, 00:10:11.459 "data_size": 0 00:10:11.459 }, 00:10:11.459 { 00:10:11.459 "name": "BaseBdev3", 00:10:11.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.459 "is_configured": false, 00:10:11.459 "data_offset": 0, 00:10:11.459 "data_size": 0 00:10:11.459 }, 00:10:11.459 { 00:10:11.459 "name": "BaseBdev4", 00:10:11.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.459 "is_configured": false, 00:10:11.459 "data_offset": 0, 00:10:11.459 "data_size": 0 00:10:11.459 } 00:10:11.459 ] 00:10:11.459 }' 00:10:11.459 15:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.459 15:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.719 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:11.719 15:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.719 15:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.719 [2024-11-20 15:17:58.047790] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:11.719 [2024-11-20 15:17:58.047847] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:11.719 15:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.719 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:11.719 15:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.719 15:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.719 [2024-11-20 15:17:58.055839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:11.719 [2024-11-20 15:17:58.057917] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:11.719 [2024-11-20 15:17:58.057963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:11.719 [2024-11-20 15:17:58.057974] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:11.719 [2024-11-20 15:17:58.057988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:11.719 [2024-11-20 15:17:58.057996] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:11.720 [2024-11-20 15:17:58.058008] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:11.720 15:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.720 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:11.720 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:11.720 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:11.720 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.720 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.720 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.720 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.720 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.720 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.720 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.720 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.720 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.720 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.720 15:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.720 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.720 15:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.720 15:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.720 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.720 "name": "Existed_Raid", 00:10:11.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.720 "strip_size_kb": 64, 00:10:11.720 "state": "configuring", 00:10:11.720 "raid_level": "raid0", 00:10:11.720 "superblock": false, 00:10:11.720 "num_base_bdevs": 4, 00:10:11.720 
"num_base_bdevs_discovered": 1, 00:10:11.720 "num_base_bdevs_operational": 4, 00:10:11.720 "base_bdevs_list": [ 00:10:11.720 { 00:10:11.720 "name": "BaseBdev1", 00:10:11.720 "uuid": "7b8cb7d7-b399-4ea3-b3f0-f71d3dd720b4", 00:10:11.720 "is_configured": true, 00:10:11.720 "data_offset": 0, 00:10:11.720 "data_size": 65536 00:10:11.720 }, 00:10:11.720 { 00:10:11.720 "name": "BaseBdev2", 00:10:11.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.720 "is_configured": false, 00:10:11.720 "data_offset": 0, 00:10:11.720 "data_size": 0 00:10:11.720 }, 00:10:11.720 { 00:10:11.720 "name": "BaseBdev3", 00:10:11.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.720 "is_configured": false, 00:10:11.720 "data_offset": 0, 00:10:11.720 "data_size": 0 00:10:11.720 }, 00:10:11.720 { 00:10:11.720 "name": "BaseBdev4", 00:10:11.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.720 "is_configured": false, 00:10:11.720 "data_offset": 0, 00:10:11.720 "data_size": 0 00:10:11.720 } 00:10:11.720 ] 00:10:11.720 }' 00:10:11.720 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.720 15:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.288 [2024-11-20 15:17:58.507040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:12.288 BaseBdev2 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:12.288 15:17:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.288 [ 00:10:12.288 { 00:10:12.288 "name": "BaseBdev2", 00:10:12.288 "aliases": [ 00:10:12.288 "24953ecc-edfb-4d02-b20d-824c0298821d" 00:10:12.288 ], 00:10:12.288 "product_name": "Malloc disk", 00:10:12.288 "block_size": 512, 00:10:12.288 "num_blocks": 65536, 00:10:12.288 "uuid": "24953ecc-edfb-4d02-b20d-824c0298821d", 00:10:12.288 "assigned_rate_limits": { 00:10:12.288 "rw_ios_per_sec": 0, 00:10:12.288 "rw_mbytes_per_sec": 0, 00:10:12.288 "r_mbytes_per_sec": 0, 00:10:12.288 "w_mbytes_per_sec": 0 00:10:12.288 }, 00:10:12.288 "claimed": true, 00:10:12.288 "claim_type": "exclusive_write", 00:10:12.288 "zoned": false, 00:10:12.288 "supported_io_types": { 
00:10:12.288 "read": true, 00:10:12.288 "write": true, 00:10:12.288 "unmap": true, 00:10:12.288 "flush": true, 00:10:12.288 "reset": true, 00:10:12.288 "nvme_admin": false, 00:10:12.288 "nvme_io": false, 00:10:12.288 "nvme_io_md": false, 00:10:12.288 "write_zeroes": true, 00:10:12.288 "zcopy": true, 00:10:12.288 "get_zone_info": false, 00:10:12.288 "zone_management": false, 00:10:12.288 "zone_append": false, 00:10:12.288 "compare": false, 00:10:12.288 "compare_and_write": false, 00:10:12.288 "abort": true, 00:10:12.288 "seek_hole": false, 00:10:12.288 "seek_data": false, 00:10:12.288 "copy": true, 00:10:12.288 "nvme_iov_md": false 00:10:12.288 }, 00:10:12.288 "memory_domains": [ 00:10:12.288 { 00:10:12.288 "dma_device_id": "system", 00:10:12.288 "dma_device_type": 1 00:10:12.288 }, 00:10:12.288 { 00:10:12.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.288 "dma_device_type": 2 00:10:12.288 } 00:10:12.288 ], 00:10:12.288 "driver_specific": {} 00:10:12.288 } 00:10:12.288 ] 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.288 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.288 "name": "Existed_Raid", 00:10:12.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.288 "strip_size_kb": 64, 00:10:12.288 "state": "configuring", 00:10:12.288 "raid_level": "raid0", 00:10:12.289 "superblock": false, 00:10:12.289 "num_base_bdevs": 4, 00:10:12.289 "num_base_bdevs_discovered": 2, 00:10:12.289 "num_base_bdevs_operational": 4, 00:10:12.289 "base_bdevs_list": [ 00:10:12.289 { 00:10:12.289 "name": "BaseBdev1", 00:10:12.289 "uuid": "7b8cb7d7-b399-4ea3-b3f0-f71d3dd720b4", 00:10:12.289 "is_configured": true, 00:10:12.289 "data_offset": 0, 00:10:12.289 "data_size": 65536 00:10:12.289 }, 00:10:12.289 { 00:10:12.289 "name": "BaseBdev2", 00:10:12.289 "uuid": "24953ecc-edfb-4d02-b20d-824c0298821d", 00:10:12.289 
"is_configured": true, 00:10:12.289 "data_offset": 0, 00:10:12.289 "data_size": 65536 00:10:12.289 }, 00:10:12.289 { 00:10:12.289 "name": "BaseBdev3", 00:10:12.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.289 "is_configured": false, 00:10:12.289 "data_offset": 0, 00:10:12.289 "data_size": 0 00:10:12.289 }, 00:10:12.289 { 00:10:12.289 "name": "BaseBdev4", 00:10:12.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.289 "is_configured": false, 00:10:12.289 "data_offset": 0, 00:10:12.289 "data_size": 0 00:10:12.289 } 00:10:12.289 ] 00:10:12.289 }' 00:10:12.289 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.289 15:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.548 15:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:12.548 15:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.548 15:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.812 [2024-11-20 15:17:59.048144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:12.812 BaseBdev3 00:10:12.812 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.812 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:12.812 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:12.812 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:12.812 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:12.812 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:12.812 15:17:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:12.812 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:12.812 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.812 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.812 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.812 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:12.812 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.812 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.812 [ 00:10:12.812 { 00:10:12.812 "name": "BaseBdev3", 00:10:12.812 "aliases": [ 00:10:12.812 "8701c714-92ad-421a-9d5e-d180641ee4fa" 00:10:12.812 ], 00:10:12.813 "product_name": "Malloc disk", 00:10:12.813 "block_size": 512, 00:10:12.813 "num_blocks": 65536, 00:10:12.813 "uuid": "8701c714-92ad-421a-9d5e-d180641ee4fa", 00:10:12.813 "assigned_rate_limits": { 00:10:12.813 "rw_ios_per_sec": 0, 00:10:12.813 "rw_mbytes_per_sec": 0, 00:10:12.813 "r_mbytes_per_sec": 0, 00:10:12.813 "w_mbytes_per_sec": 0 00:10:12.813 }, 00:10:12.813 "claimed": true, 00:10:12.813 "claim_type": "exclusive_write", 00:10:12.813 "zoned": false, 00:10:12.813 "supported_io_types": { 00:10:12.813 "read": true, 00:10:12.813 "write": true, 00:10:12.813 "unmap": true, 00:10:12.813 "flush": true, 00:10:12.813 "reset": true, 00:10:12.813 "nvme_admin": false, 00:10:12.813 "nvme_io": false, 00:10:12.813 "nvme_io_md": false, 00:10:12.813 "write_zeroes": true, 00:10:12.813 "zcopy": true, 00:10:12.813 "get_zone_info": false, 00:10:12.813 "zone_management": false, 00:10:12.813 "zone_append": false, 00:10:12.813 "compare": false, 00:10:12.813 "compare_and_write": false, 
00:10:12.813 "abort": true, 00:10:12.813 "seek_hole": false, 00:10:12.813 "seek_data": false, 00:10:12.813 "copy": true, 00:10:12.813 "nvme_iov_md": false 00:10:12.813 }, 00:10:12.813 "memory_domains": [ 00:10:12.813 { 00:10:12.813 "dma_device_id": "system", 00:10:12.813 "dma_device_type": 1 00:10:12.813 }, 00:10:12.813 { 00:10:12.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.813 "dma_device_type": 2 00:10:12.813 } 00:10:12.813 ], 00:10:12.813 "driver_specific": {} 00:10:12.813 } 00:10:12.813 ] 00:10:12.813 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.813 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:12.813 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:12.813 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:12.813 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:12.813 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.813 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.813 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.813 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.813 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.813 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.813 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.813 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:12.813 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.813 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.813 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.813 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.813 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.813 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.813 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.813 "name": "Existed_Raid", 00:10:12.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.813 "strip_size_kb": 64, 00:10:12.813 "state": "configuring", 00:10:12.813 "raid_level": "raid0", 00:10:12.813 "superblock": false, 00:10:12.813 "num_base_bdevs": 4, 00:10:12.813 "num_base_bdevs_discovered": 3, 00:10:12.813 "num_base_bdevs_operational": 4, 00:10:12.813 "base_bdevs_list": [ 00:10:12.813 { 00:10:12.813 "name": "BaseBdev1", 00:10:12.813 "uuid": "7b8cb7d7-b399-4ea3-b3f0-f71d3dd720b4", 00:10:12.813 "is_configured": true, 00:10:12.813 "data_offset": 0, 00:10:12.813 "data_size": 65536 00:10:12.813 }, 00:10:12.813 { 00:10:12.813 "name": "BaseBdev2", 00:10:12.813 "uuid": "24953ecc-edfb-4d02-b20d-824c0298821d", 00:10:12.813 "is_configured": true, 00:10:12.813 "data_offset": 0, 00:10:12.813 "data_size": 65536 00:10:12.813 }, 00:10:12.813 { 00:10:12.813 "name": "BaseBdev3", 00:10:12.813 "uuid": "8701c714-92ad-421a-9d5e-d180641ee4fa", 00:10:12.813 "is_configured": true, 00:10:12.813 "data_offset": 0, 00:10:12.813 "data_size": 65536 00:10:12.813 }, 00:10:12.813 { 00:10:12.813 "name": "BaseBdev4", 00:10:12.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.813 "is_configured": false, 
00:10:12.813 "data_offset": 0, 00:10:12.813 "data_size": 0 00:10:12.813 } 00:10:12.813 ] 00:10:12.813 }' 00:10:12.813 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.813 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.072 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.073 [2024-11-20 15:17:59.495950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:13.073 [2024-11-20 15:17:59.496001] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:13.073 [2024-11-20 15:17:59.496012] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:13.073 [2024-11-20 15:17:59.496299] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:13.073 [2024-11-20 15:17:59.496460] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:13.073 [2024-11-20 15:17:59.496482] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:13.073 [2024-11-20 15:17:59.496763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.073 BaseBdev4 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.073 [ 00:10:13.073 { 00:10:13.073 "name": "BaseBdev4", 00:10:13.073 "aliases": [ 00:10:13.073 "db354b71-e6e8-470a-ad1c-c75a7140fba7" 00:10:13.073 ], 00:10:13.073 "product_name": "Malloc disk", 00:10:13.073 "block_size": 512, 00:10:13.073 "num_blocks": 65536, 00:10:13.073 "uuid": "db354b71-e6e8-470a-ad1c-c75a7140fba7", 00:10:13.073 "assigned_rate_limits": { 00:10:13.073 "rw_ios_per_sec": 0, 00:10:13.073 "rw_mbytes_per_sec": 0, 00:10:13.073 "r_mbytes_per_sec": 0, 00:10:13.073 "w_mbytes_per_sec": 0 00:10:13.073 }, 00:10:13.073 "claimed": true, 00:10:13.073 "claim_type": "exclusive_write", 00:10:13.073 "zoned": false, 00:10:13.073 "supported_io_types": { 00:10:13.073 "read": true, 00:10:13.073 "write": true, 00:10:13.073 "unmap": true, 00:10:13.073 "flush": true, 00:10:13.073 "reset": true, 00:10:13.073 
"nvme_admin": false, 00:10:13.073 "nvme_io": false, 00:10:13.073 "nvme_io_md": false, 00:10:13.073 "write_zeroes": true, 00:10:13.073 "zcopy": true, 00:10:13.073 "get_zone_info": false, 00:10:13.073 "zone_management": false, 00:10:13.073 "zone_append": false, 00:10:13.073 "compare": false, 00:10:13.073 "compare_and_write": false, 00:10:13.073 "abort": true, 00:10:13.073 "seek_hole": false, 00:10:13.073 "seek_data": false, 00:10:13.073 "copy": true, 00:10:13.073 "nvme_iov_md": false 00:10:13.073 }, 00:10:13.073 "memory_domains": [ 00:10:13.073 { 00:10:13.073 "dma_device_id": "system", 00:10:13.073 "dma_device_type": 1 00:10:13.073 }, 00:10:13.073 { 00:10:13.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.073 "dma_device_type": 2 00:10:13.073 } 00:10:13.073 ], 00:10:13.073 "driver_specific": {} 00:10:13.073 } 00:10:13.073 ] 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.073 15:17:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.073 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.332 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.332 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.332 "name": "Existed_Raid", 00:10:13.332 "uuid": "07c2eab7-cac6-4e9c-a58b-3b13c8ad5702", 00:10:13.332 "strip_size_kb": 64, 00:10:13.332 "state": "online", 00:10:13.332 "raid_level": "raid0", 00:10:13.332 "superblock": false, 00:10:13.332 "num_base_bdevs": 4, 00:10:13.332 "num_base_bdevs_discovered": 4, 00:10:13.332 "num_base_bdevs_operational": 4, 00:10:13.332 "base_bdevs_list": [ 00:10:13.332 { 00:10:13.332 "name": "BaseBdev1", 00:10:13.332 "uuid": "7b8cb7d7-b399-4ea3-b3f0-f71d3dd720b4", 00:10:13.332 "is_configured": true, 00:10:13.332 "data_offset": 0, 00:10:13.332 "data_size": 65536 00:10:13.332 }, 00:10:13.332 { 00:10:13.332 "name": "BaseBdev2", 00:10:13.332 "uuid": "24953ecc-edfb-4d02-b20d-824c0298821d", 00:10:13.332 "is_configured": true, 00:10:13.332 "data_offset": 0, 00:10:13.332 "data_size": 65536 00:10:13.332 }, 00:10:13.332 { 00:10:13.332 "name": "BaseBdev3", 00:10:13.332 "uuid": 
"8701c714-92ad-421a-9d5e-d180641ee4fa", 00:10:13.332 "is_configured": true, 00:10:13.332 "data_offset": 0, 00:10:13.332 "data_size": 65536 00:10:13.332 }, 00:10:13.332 { 00:10:13.332 "name": "BaseBdev4", 00:10:13.332 "uuid": "db354b71-e6e8-470a-ad1c-c75a7140fba7", 00:10:13.332 "is_configured": true, 00:10:13.332 "data_offset": 0, 00:10:13.332 "data_size": 65536 00:10:13.332 } 00:10:13.332 ] 00:10:13.332 }' 00:10:13.332 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.332 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.591 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:13.591 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:13.591 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:13.591 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:13.591 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:13.591 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:13.591 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:13.591 15:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:13.591 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.591 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.591 [2024-11-20 15:17:59.963679] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.591 15:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.592 15:17:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:13.592 "name": "Existed_Raid", 00:10:13.592 "aliases": [ 00:10:13.592 "07c2eab7-cac6-4e9c-a58b-3b13c8ad5702" 00:10:13.592 ], 00:10:13.592 "product_name": "Raid Volume", 00:10:13.592 "block_size": 512, 00:10:13.592 "num_blocks": 262144, 00:10:13.592 "uuid": "07c2eab7-cac6-4e9c-a58b-3b13c8ad5702", 00:10:13.592 "assigned_rate_limits": { 00:10:13.592 "rw_ios_per_sec": 0, 00:10:13.592 "rw_mbytes_per_sec": 0, 00:10:13.592 "r_mbytes_per_sec": 0, 00:10:13.592 "w_mbytes_per_sec": 0 00:10:13.592 }, 00:10:13.592 "claimed": false, 00:10:13.592 "zoned": false, 00:10:13.592 "supported_io_types": { 00:10:13.592 "read": true, 00:10:13.592 "write": true, 00:10:13.592 "unmap": true, 00:10:13.592 "flush": true, 00:10:13.592 "reset": true, 00:10:13.592 "nvme_admin": false, 00:10:13.592 "nvme_io": false, 00:10:13.592 "nvme_io_md": false, 00:10:13.592 "write_zeroes": true, 00:10:13.592 "zcopy": false, 00:10:13.592 "get_zone_info": false, 00:10:13.592 "zone_management": false, 00:10:13.592 "zone_append": false, 00:10:13.592 "compare": false, 00:10:13.592 "compare_and_write": false, 00:10:13.592 "abort": false, 00:10:13.592 "seek_hole": false, 00:10:13.592 "seek_data": false, 00:10:13.592 "copy": false, 00:10:13.592 "nvme_iov_md": false 00:10:13.592 }, 00:10:13.592 "memory_domains": [ 00:10:13.592 { 00:10:13.592 "dma_device_id": "system", 00:10:13.592 "dma_device_type": 1 00:10:13.592 }, 00:10:13.592 { 00:10:13.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.592 "dma_device_type": 2 00:10:13.592 }, 00:10:13.592 { 00:10:13.592 "dma_device_id": "system", 00:10:13.592 "dma_device_type": 1 00:10:13.592 }, 00:10:13.592 { 00:10:13.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.592 "dma_device_type": 2 00:10:13.592 }, 00:10:13.592 { 00:10:13.592 "dma_device_id": "system", 00:10:13.592 "dma_device_type": 1 00:10:13.592 }, 00:10:13.592 { 00:10:13.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:13.592 "dma_device_type": 2 00:10:13.592 }, 00:10:13.592 { 00:10:13.592 "dma_device_id": "system", 00:10:13.592 "dma_device_type": 1 00:10:13.592 }, 00:10:13.592 { 00:10:13.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.592 "dma_device_type": 2 00:10:13.592 } 00:10:13.592 ], 00:10:13.592 "driver_specific": { 00:10:13.592 "raid": { 00:10:13.592 "uuid": "07c2eab7-cac6-4e9c-a58b-3b13c8ad5702", 00:10:13.592 "strip_size_kb": 64, 00:10:13.592 "state": "online", 00:10:13.592 "raid_level": "raid0", 00:10:13.592 "superblock": false, 00:10:13.592 "num_base_bdevs": 4, 00:10:13.592 "num_base_bdevs_discovered": 4, 00:10:13.592 "num_base_bdevs_operational": 4, 00:10:13.592 "base_bdevs_list": [ 00:10:13.592 { 00:10:13.592 "name": "BaseBdev1", 00:10:13.592 "uuid": "7b8cb7d7-b399-4ea3-b3f0-f71d3dd720b4", 00:10:13.592 "is_configured": true, 00:10:13.592 "data_offset": 0, 00:10:13.592 "data_size": 65536 00:10:13.592 }, 00:10:13.592 { 00:10:13.592 "name": "BaseBdev2", 00:10:13.592 "uuid": "24953ecc-edfb-4d02-b20d-824c0298821d", 00:10:13.592 "is_configured": true, 00:10:13.592 "data_offset": 0, 00:10:13.592 "data_size": 65536 00:10:13.592 }, 00:10:13.592 { 00:10:13.592 "name": "BaseBdev3", 00:10:13.592 "uuid": "8701c714-92ad-421a-9d5e-d180641ee4fa", 00:10:13.592 "is_configured": true, 00:10:13.592 "data_offset": 0, 00:10:13.592 "data_size": 65536 00:10:13.592 }, 00:10:13.592 { 00:10:13.592 "name": "BaseBdev4", 00:10:13.592 "uuid": "db354b71-e6e8-470a-ad1c-c75a7140fba7", 00:10:13.592 "is_configured": true, 00:10:13.592 "data_offset": 0, 00:10:13.592 "data_size": 65536 00:10:13.592 } 00:10:13.592 ] 00:10:13.592 } 00:10:13.592 } 00:10:13.592 }' 00:10:13.592 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:13.592 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:13.592 BaseBdev2 00:10:13.592 BaseBdev3 
00:10:13.592 BaseBdev4' 00:10:13.592 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.851 15:18:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.851 15:18:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.851 15:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.851 [2024-11-20 15:18:00.299084] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:13.851 [2024-11-20 15:18:00.299117] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:13.851 [2024-11-20 15:18:00.299174] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.116 15:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.116 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:14.116 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:14.116 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:14.116 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:14.116 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:14.116 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:14.116 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.116 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:14.116 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.116 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:14.116 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.116 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.116 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.116 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.116 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.116 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.116 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.116 15:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.116 15:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.116 15:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.116 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.116 "name": "Existed_Raid", 00:10:14.116 "uuid": "07c2eab7-cac6-4e9c-a58b-3b13c8ad5702", 00:10:14.116 "strip_size_kb": 64, 00:10:14.116 "state": "offline", 00:10:14.116 "raid_level": "raid0", 00:10:14.116 "superblock": false, 00:10:14.116 "num_base_bdevs": 4, 00:10:14.116 "num_base_bdevs_discovered": 3, 00:10:14.116 "num_base_bdevs_operational": 3, 00:10:14.116 "base_bdevs_list": [ 00:10:14.116 { 00:10:14.116 "name": null, 00:10:14.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.116 "is_configured": false, 00:10:14.116 "data_offset": 0, 00:10:14.116 "data_size": 65536 00:10:14.116 }, 00:10:14.116 { 00:10:14.116 "name": "BaseBdev2", 00:10:14.116 "uuid": "24953ecc-edfb-4d02-b20d-824c0298821d", 00:10:14.116 "is_configured": 
true, 00:10:14.116 "data_offset": 0, 00:10:14.116 "data_size": 65536 00:10:14.116 }, 00:10:14.116 { 00:10:14.116 "name": "BaseBdev3", 00:10:14.116 "uuid": "8701c714-92ad-421a-9d5e-d180641ee4fa", 00:10:14.116 "is_configured": true, 00:10:14.116 "data_offset": 0, 00:10:14.116 "data_size": 65536 00:10:14.116 }, 00:10:14.116 { 00:10:14.116 "name": "BaseBdev4", 00:10:14.116 "uuid": "db354b71-e6e8-470a-ad1c-c75a7140fba7", 00:10:14.116 "is_configured": true, 00:10:14.116 "data_offset": 0, 00:10:14.116 "data_size": 65536 00:10:14.116 } 00:10:14.116 ] 00:10:14.116 }' 00:10:14.116 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.116 15:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.375 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:14.375 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:14.375 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.375 15:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.375 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:14.375 15:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.375 15:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.375 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:14.375 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:14.375 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:14.375 15:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:14.375 15:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.375 [2024-11-20 15:18:00.835241] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:14.634 15:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.634 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:14.634 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:14.634 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.634 15:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.634 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:14.634 15:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.634 15:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.634 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:14.634 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:14.634 15:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:14.634 15:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.634 15:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.634 [2024-11-20 15:18:00.988720] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:14.634 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.634 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:14.634 15:18:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:14.634 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.634 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:14.634 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.634 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.893 [2024-11-20 15:18:01.139538] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:14.893 [2024-11-20 15:18:01.139590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.893 BaseBdev2 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.893 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.893 [ 00:10:14.893 { 00:10:14.893 "name": "BaseBdev2", 00:10:14.893 "aliases": [ 00:10:14.893 "1f476538-1cce-423d-b3c7-88fc64a2fe31" 00:10:14.893 ], 00:10:14.893 "product_name": "Malloc disk", 00:10:14.893 "block_size": 512, 00:10:14.893 "num_blocks": 65536, 00:10:14.893 "uuid": "1f476538-1cce-423d-b3c7-88fc64a2fe31", 00:10:14.893 "assigned_rate_limits": { 00:10:14.893 "rw_ios_per_sec": 0, 00:10:14.893 "rw_mbytes_per_sec": 0, 00:10:14.893 "r_mbytes_per_sec": 0, 00:10:14.893 "w_mbytes_per_sec": 0 00:10:14.893 }, 00:10:14.893 "claimed": false, 00:10:14.893 "zoned": false, 00:10:14.893 "supported_io_types": { 00:10:14.893 "read": true, 00:10:14.894 "write": true, 00:10:14.894 "unmap": true, 00:10:14.894 "flush": true, 00:10:14.894 "reset": true, 00:10:14.894 "nvme_admin": false, 00:10:14.894 "nvme_io": false, 00:10:14.894 "nvme_io_md": false, 00:10:14.894 "write_zeroes": true, 00:10:14.894 "zcopy": true, 00:10:14.894 "get_zone_info": false, 00:10:14.894 "zone_management": false, 00:10:14.894 "zone_append": false, 00:10:14.894 "compare": false, 00:10:14.894 "compare_and_write": false, 00:10:14.894 "abort": true, 00:10:14.894 "seek_hole": false, 00:10:14.894 
"seek_data": false, 00:10:14.894 "copy": true, 00:10:14.894 "nvme_iov_md": false 00:10:14.894 }, 00:10:14.894 "memory_domains": [ 00:10:14.894 { 00:10:14.894 "dma_device_id": "system", 00:10:14.894 "dma_device_type": 1 00:10:14.894 }, 00:10:14.894 { 00:10:14.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.152 "dma_device_type": 2 00:10:15.152 } 00:10:15.152 ], 00:10:15.152 "driver_specific": {} 00:10:15.152 } 00:10:15.152 ] 00:10:15.152 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.152 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:15.152 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:15.152 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:15.152 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:15.152 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.152 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.152 BaseBdev3 00:10:15.152 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.152 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:15.152 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:15.152 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:15.152 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:15.152 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:15.152 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:15.152 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:15.152 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.152 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.152 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.152 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:15.152 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.152 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.152 [ 00:10:15.152 { 00:10:15.152 "name": "BaseBdev3", 00:10:15.152 "aliases": [ 00:10:15.152 "ef7ecfe9-923a-45b4-b6d0-931a8dbbaec5" 00:10:15.152 ], 00:10:15.152 "product_name": "Malloc disk", 00:10:15.152 "block_size": 512, 00:10:15.152 "num_blocks": 65536, 00:10:15.152 "uuid": "ef7ecfe9-923a-45b4-b6d0-931a8dbbaec5", 00:10:15.152 "assigned_rate_limits": { 00:10:15.152 "rw_ios_per_sec": 0, 00:10:15.152 "rw_mbytes_per_sec": 0, 00:10:15.152 "r_mbytes_per_sec": 0, 00:10:15.152 "w_mbytes_per_sec": 0 00:10:15.152 }, 00:10:15.152 "claimed": false, 00:10:15.152 "zoned": false, 00:10:15.152 "supported_io_types": { 00:10:15.152 "read": true, 00:10:15.152 "write": true, 00:10:15.152 "unmap": true, 00:10:15.152 "flush": true, 00:10:15.152 "reset": true, 00:10:15.152 "nvme_admin": false, 00:10:15.152 "nvme_io": false, 00:10:15.152 "nvme_io_md": false, 00:10:15.152 "write_zeroes": true, 00:10:15.153 "zcopy": true, 00:10:15.153 "get_zone_info": false, 00:10:15.153 "zone_management": false, 00:10:15.153 "zone_append": false, 00:10:15.153 "compare": false, 00:10:15.153 "compare_and_write": false, 00:10:15.153 "abort": true, 00:10:15.153 "seek_hole": false, 00:10:15.153 "seek_data": false, 
00:10:15.153 "copy": true, 00:10:15.153 "nvme_iov_md": false 00:10:15.153 }, 00:10:15.153 "memory_domains": [ 00:10:15.153 { 00:10:15.153 "dma_device_id": "system", 00:10:15.153 "dma_device_type": 1 00:10:15.153 }, 00:10:15.153 { 00:10:15.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.153 "dma_device_type": 2 00:10:15.153 } 00:10:15.153 ], 00:10:15.153 "driver_specific": {} 00:10:15.153 } 00:10:15.153 ] 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.153 BaseBdev4 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:15.153 
15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.153 [ 00:10:15.153 { 00:10:15.153 "name": "BaseBdev4", 00:10:15.153 "aliases": [ 00:10:15.153 "ac327caa-19cf-4a58-bc6f-46c331c060a7" 00:10:15.153 ], 00:10:15.153 "product_name": "Malloc disk", 00:10:15.153 "block_size": 512, 00:10:15.153 "num_blocks": 65536, 00:10:15.153 "uuid": "ac327caa-19cf-4a58-bc6f-46c331c060a7", 00:10:15.153 "assigned_rate_limits": { 00:10:15.153 "rw_ios_per_sec": 0, 00:10:15.153 "rw_mbytes_per_sec": 0, 00:10:15.153 "r_mbytes_per_sec": 0, 00:10:15.153 "w_mbytes_per_sec": 0 00:10:15.153 }, 00:10:15.153 "claimed": false, 00:10:15.153 "zoned": false, 00:10:15.153 "supported_io_types": { 00:10:15.153 "read": true, 00:10:15.153 "write": true, 00:10:15.153 "unmap": true, 00:10:15.153 "flush": true, 00:10:15.153 "reset": true, 00:10:15.153 "nvme_admin": false, 00:10:15.153 "nvme_io": false, 00:10:15.153 "nvme_io_md": false, 00:10:15.153 "write_zeroes": true, 00:10:15.153 "zcopy": true, 00:10:15.153 "get_zone_info": false, 00:10:15.153 "zone_management": false, 00:10:15.153 "zone_append": false, 00:10:15.153 "compare": false, 00:10:15.153 "compare_and_write": false, 00:10:15.153 "abort": true, 00:10:15.153 "seek_hole": false, 00:10:15.153 "seek_data": false, 00:10:15.153 
"copy": true, 00:10:15.153 "nvme_iov_md": false 00:10:15.153 }, 00:10:15.153 "memory_domains": [ 00:10:15.153 { 00:10:15.153 "dma_device_id": "system", 00:10:15.153 "dma_device_type": 1 00:10:15.153 }, 00:10:15.153 { 00:10:15.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.153 "dma_device_type": 2 00:10:15.153 } 00:10:15.153 ], 00:10:15.153 "driver_specific": {} 00:10:15.153 } 00:10:15.153 ] 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.153 [2024-11-20 15:18:01.551392] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:15.153 [2024-11-20 15:18:01.551565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:15.153 [2024-11-20 15:18:01.551675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:15.153 [2024-11-20 15:18:01.553910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:15.153 [2024-11-20 15:18:01.554075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.153 15:18:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.153 "name": "Existed_Raid", 00:10:15.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.153 "strip_size_kb": 64, 00:10:15.153 "state": "configuring", 00:10:15.153 
"raid_level": "raid0", 00:10:15.153 "superblock": false, 00:10:15.153 "num_base_bdevs": 4, 00:10:15.153 "num_base_bdevs_discovered": 3, 00:10:15.153 "num_base_bdevs_operational": 4, 00:10:15.153 "base_bdevs_list": [ 00:10:15.153 { 00:10:15.153 "name": "BaseBdev1", 00:10:15.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.153 "is_configured": false, 00:10:15.153 "data_offset": 0, 00:10:15.153 "data_size": 0 00:10:15.153 }, 00:10:15.153 { 00:10:15.153 "name": "BaseBdev2", 00:10:15.153 "uuid": "1f476538-1cce-423d-b3c7-88fc64a2fe31", 00:10:15.153 "is_configured": true, 00:10:15.153 "data_offset": 0, 00:10:15.153 "data_size": 65536 00:10:15.153 }, 00:10:15.153 { 00:10:15.153 "name": "BaseBdev3", 00:10:15.153 "uuid": "ef7ecfe9-923a-45b4-b6d0-931a8dbbaec5", 00:10:15.153 "is_configured": true, 00:10:15.153 "data_offset": 0, 00:10:15.153 "data_size": 65536 00:10:15.153 }, 00:10:15.153 { 00:10:15.153 "name": "BaseBdev4", 00:10:15.153 "uuid": "ac327caa-19cf-4a58-bc6f-46c331c060a7", 00:10:15.153 "is_configured": true, 00:10:15.153 "data_offset": 0, 00:10:15.153 "data_size": 65536 00:10:15.153 } 00:10:15.153 ] 00:10:15.153 }' 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.153 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.721 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:15.721 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.721 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.721 [2024-11-20 15:18:01.955017] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:15.721 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.721 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:15.721 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.721 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.721 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.721 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.721 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.721 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.721 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.721 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.721 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.721 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.721 15:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.721 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.721 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.721 15:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.721 15:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.721 "name": "Existed_Raid", 00:10:15.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.721 "strip_size_kb": 64, 00:10:15.721 "state": "configuring", 00:10:15.721 "raid_level": "raid0", 00:10:15.721 "superblock": false, 00:10:15.721 
"num_base_bdevs": 4, 00:10:15.721 "num_base_bdevs_discovered": 2, 00:10:15.721 "num_base_bdevs_operational": 4, 00:10:15.721 "base_bdevs_list": [ 00:10:15.721 { 00:10:15.721 "name": "BaseBdev1", 00:10:15.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.721 "is_configured": false, 00:10:15.721 "data_offset": 0, 00:10:15.721 "data_size": 0 00:10:15.721 }, 00:10:15.721 { 00:10:15.721 "name": null, 00:10:15.721 "uuid": "1f476538-1cce-423d-b3c7-88fc64a2fe31", 00:10:15.721 "is_configured": false, 00:10:15.721 "data_offset": 0, 00:10:15.721 "data_size": 65536 00:10:15.721 }, 00:10:15.721 { 00:10:15.721 "name": "BaseBdev3", 00:10:15.721 "uuid": "ef7ecfe9-923a-45b4-b6d0-931a8dbbaec5", 00:10:15.721 "is_configured": true, 00:10:15.721 "data_offset": 0, 00:10:15.721 "data_size": 65536 00:10:15.721 }, 00:10:15.721 { 00:10:15.721 "name": "BaseBdev4", 00:10:15.721 "uuid": "ac327caa-19cf-4a58-bc6f-46c331c060a7", 00:10:15.721 "is_configured": true, 00:10:15.721 "data_offset": 0, 00:10:15.721 "data_size": 65536 00:10:15.721 } 00:10:15.721 ] 00:10:15.721 }' 00:10:15.721 15:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.721 15:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.979 15:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.979 15:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.979 15:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.979 15:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:15.979 15:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.979 15:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:15.979 15:18:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:15.979 15:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.979 15:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.237 [2024-11-20 15:18:02.491774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:16.237 BaseBdev1 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:16.237 [ 00:10:16.237 { 00:10:16.237 "name": "BaseBdev1", 00:10:16.237 "aliases": [ 00:10:16.237 "864c5a66-d1da-467b-97d9-142ab9add2e9" 00:10:16.237 ], 00:10:16.237 "product_name": "Malloc disk", 00:10:16.237 "block_size": 512, 00:10:16.237 "num_blocks": 65536, 00:10:16.237 "uuid": "864c5a66-d1da-467b-97d9-142ab9add2e9", 00:10:16.237 "assigned_rate_limits": { 00:10:16.237 "rw_ios_per_sec": 0, 00:10:16.237 "rw_mbytes_per_sec": 0, 00:10:16.237 "r_mbytes_per_sec": 0, 00:10:16.237 "w_mbytes_per_sec": 0 00:10:16.237 }, 00:10:16.237 "claimed": true, 00:10:16.237 "claim_type": "exclusive_write", 00:10:16.237 "zoned": false, 00:10:16.237 "supported_io_types": { 00:10:16.237 "read": true, 00:10:16.237 "write": true, 00:10:16.237 "unmap": true, 00:10:16.237 "flush": true, 00:10:16.237 "reset": true, 00:10:16.237 "nvme_admin": false, 00:10:16.237 "nvme_io": false, 00:10:16.237 "nvme_io_md": false, 00:10:16.237 "write_zeroes": true, 00:10:16.237 "zcopy": true, 00:10:16.237 "get_zone_info": false, 00:10:16.237 "zone_management": false, 00:10:16.237 "zone_append": false, 00:10:16.237 "compare": false, 00:10:16.237 "compare_and_write": false, 00:10:16.237 "abort": true, 00:10:16.237 "seek_hole": false, 00:10:16.237 "seek_data": false, 00:10:16.237 "copy": true, 00:10:16.237 "nvme_iov_md": false 00:10:16.237 }, 00:10:16.237 "memory_domains": [ 00:10:16.237 { 00:10:16.237 "dma_device_id": "system", 00:10:16.237 "dma_device_type": 1 00:10:16.237 }, 00:10:16.237 { 00:10:16.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.237 "dma_device_type": 2 00:10:16.237 } 00:10:16.237 ], 00:10:16.237 "driver_specific": {} 00:10:16.237 } 00:10:16.237 ] 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.237 "name": "Existed_Raid", 00:10:16.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.237 "strip_size_kb": 64, 00:10:16.237 "state": "configuring", 00:10:16.237 "raid_level": "raid0", 00:10:16.237 "superblock": false, 
00:10:16.237 "num_base_bdevs": 4, 00:10:16.237 "num_base_bdevs_discovered": 3, 00:10:16.237 "num_base_bdevs_operational": 4, 00:10:16.237 "base_bdevs_list": [ 00:10:16.237 { 00:10:16.237 "name": "BaseBdev1", 00:10:16.237 "uuid": "864c5a66-d1da-467b-97d9-142ab9add2e9", 00:10:16.237 "is_configured": true, 00:10:16.237 "data_offset": 0, 00:10:16.237 "data_size": 65536 00:10:16.237 }, 00:10:16.237 { 00:10:16.237 "name": null, 00:10:16.237 "uuid": "1f476538-1cce-423d-b3c7-88fc64a2fe31", 00:10:16.237 "is_configured": false, 00:10:16.237 "data_offset": 0, 00:10:16.237 "data_size": 65536 00:10:16.237 }, 00:10:16.237 { 00:10:16.237 "name": "BaseBdev3", 00:10:16.237 "uuid": "ef7ecfe9-923a-45b4-b6d0-931a8dbbaec5", 00:10:16.237 "is_configured": true, 00:10:16.237 "data_offset": 0, 00:10:16.237 "data_size": 65536 00:10:16.237 }, 00:10:16.237 { 00:10:16.237 "name": "BaseBdev4", 00:10:16.237 "uuid": "ac327caa-19cf-4a58-bc6f-46c331c060a7", 00:10:16.237 "is_configured": true, 00:10:16.237 "data_offset": 0, 00:10:16.237 "data_size": 65536 00:10:16.237 } 00:10:16.237 ] 00:10:16.237 }' 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.237 15:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.495 15:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.495 15:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.495 15:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.495 15:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:16.495 15:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.753 15:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:16.753 15:18:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:16.753 15:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.753 15:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.753 [2024-11-20 15:18:02.995237] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:16.753 15:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.753 15:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:16.753 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.753 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.753 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.753 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.753 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.753 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.753 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.753 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.753 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.753 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.753 15:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.753 15:18:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:16.753 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.753 15:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.753 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.753 "name": "Existed_Raid", 00:10:16.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.753 "strip_size_kb": 64, 00:10:16.753 "state": "configuring", 00:10:16.753 "raid_level": "raid0", 00:10:16.753 "superblock": false, 00:10:16.753 "num_base_bdevs": 4, 00:10:16.753 "num_base_bdevs_discovered": 2, 00:10:16.753 "num_base_bdevs_operational": 4, 00:10:16.753 "base_bdevs_list": [ 00:10:16.753 { 00:10:16.753 "name": "BaseBdev1", 00:10:16.753 "uuid": "864c5a66-d1da-467b-97d9-142ab9add2e9", 00:10:16.753 "is_configured": true, 00:10:16.753 "data_offset": 0, 00:10:16.753 "data_size": 65536 00:10:16.753 }, 00:10:16.753 { 00:10:16.753 "name": null, 00:10:16.753 "uuid": "1f476538-1cce-423d-b3c7-88fc64a2fe31", 00:10:16.753 "is_configured": false, 00:10:16.753 "data_offset": 0, 00:10:16.753 "data_size": 65536 00:10:16.753 }, 00:10:16.753 { 00:10:16.753 "name": null, 00:10:16.753 "uuid": "ef7ecfe9-923a-45b4-b6d0-931a8dbbaec5", 00:10:16.753 "is_configured": false, 00:10:16.753 "data_offset": 0, 00:10:16.753 "data_size": 65536 00:10:16.753 }, 00:10:16.753 { 00:10:16.753 "name": "BaseBdev4", 00:10:16.753 "uuid": "ac327caa-19cf-4a58-bc6f-46c331c060a7", 00:10:16.753 "is_configured": true, 00:10:16.753 "data_offset": 0, 00:10:16.753 "data_size": 65536 00:10:16.753 } 00:10:16.753 ] 00:10:16.753 }' 00:10:16.753 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.753 15:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.011 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
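The two snapshots above show what removing a base bdev does to `base_bdevs_list`: whether the slot is vacated by `bdev_raid_remove_base_bdev` (BaseBdev2) or by deleting the underlying malloc bdev (BaseBdev3, later BaseBdev1), the slot is kept as a placeholder with `"name": null`, the uuid preserved, and `is_configured` flipped to `false`. A sketch of the implied bookkeeping, using a trimmed copy of the list from the log; that SPDK derives the counter this way internally is an assumption:

```python
# base_bdevs_list as recorded in the log after removing BaseBdev2 and
# BaseBdev3: vacated slots keep their uuid but lose their name.
base_bdevs_list = [
    {"name": "BaseBdev1", "uuid": "864c5a66-d1da-467b-97d9-142ab9add2e9", "is_configured": True},
    {"name": None,        "uuid": "1f476538-1cce-423d-b3c7-88fc64a2fe31", "is_configured": False},
    {"name": None,        "uuid": "ef7ecfe9-923a-45b4-b6d0-931a8dbbaec5", "is_configured": False},
    {"name": "BaseBdev4", "uuid": "ac327caa-19cf-4a58-bc6f-46c331c060a7", "is_configured": True},
]

# The discovered count tracks configured slots, not list length.
num_discovered = sum(1 for b in base_bdevs_list if b["is_configured"])
assert num_discovered == 2        # matches "num_base_bdevs_discovered": 2
assert len(base_bdevs_list) == 4  # matches "num_base_bdevs": 4
```

Keeping the uuid in the vacated slot is what lets the later `bdev_raid_add_base_bdev` calls in this test re-attach a bdev to the same position.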
'.[0].base_bdevs_list[2].is_configured' 00:10:17.011 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.011 15:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.011 15:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.011 15:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.011 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:17.011 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:17.012 15:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.012 15:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.012 [2024-11-20 15:18:03.491021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:17.271 15:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.271 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:17.271 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.271 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.271 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.271 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.271 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.271 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:17.271 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.271 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.271 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.271 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.271 15:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.271 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.271 15:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.271 15:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.271 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.271 "name": "Existed_Raid", 00:10:17.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.271 "strip_size_kb": 64, 00:10:17.271 "state": "configuring", 00:10:17.271 "raid_level": "raid0", 00:10:17.271 "superblock": false, 00:10:17.271 "num_base_bdevs": 4, 00:10:17.271 "num_base_bdevs_discovered": 3, 00:10:17.271 "num_base_bdevs_operational": 4, 00:10:17.271 "base_bdevs_list": [ 00:10:17.271 { 00:10:17.271 "name": "BaseBdev1", 00:10:17.271 "uuid": "864c5a66-d1da-467b-97d9-142ab9add2e9", 00:10:17.271 "is_configured": true, 00:10:17.271 "data_offset": 0, 00:10:17.271 "data_size": 65536 00:10:17.271 }, 00:10:17.271 { 00:10:17.271 "name": null, 00:10:17.271 "uuid": "1f476538-1cce-423d-b3c7-88fc64a2fe31", 00:10:17.271 "is_configured": false, 00:10:17.271 "data_offset": 0, 00:10:17.271 "data_size": 65536 00:10:17.271 }, 00:10:17.271 { 00:10:17.271 "name": "BaseBdev3", 00:10:17.271 "uuid": "ef7ecfe9-923a-45b4-b6d0-931a8dbbaec5", 00:10:17.271 "is_configured": 
true, 00:10:17.271 "data_offset": 0, 00:10:17.271 "data_size": 65536 00:10:17.271 }, 00:10:17.271 { 00:10:17.271 "name": "BaseBdev4", 00:10:17.271 "uuid": "ac327caa-19cf-4a58-bc6f-46c331c060a7", 00:10:17.271 "is_configured": true, 00:10:17.271 "data_offset": 0, 00:10:17.271 "data_size": 65536 00:10:17.271 } 00:10:17.271 ] 00:10:17.271 }' 00:10:17.271 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.271 15:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.531 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.531 15:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.531 15:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.531 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:17.531 15:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.531 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:17.531 15:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:17.531 15:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.531 15:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.531 [2024-11-20 15:18:03.951048] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:17.791 15:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.791 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:17.791 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:17.791 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.791 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.791 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.791 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.791 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.791 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.791 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.791 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.791 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.791 15:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.791 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.791 15:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.791 15:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.791 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.791 "name": "Existed_Raid", 00:10:17.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.791 "strip_size_kb": 64, 00:10:17.791 "state": "configuring", 00:10:17.791 "raid_level": "raid0", 00:10:17.791 "superblock": false, 00:10:17.791 "num_base_bdevs": 4, 00:10:17.791 "num_base_bdevs_discovered": 2, 00:10:17.791 "num_base_bdevs_operational": 4, 00:10:17.791 
"base_bdevs_list": [ 00:10:17.791 { 00:10:17.791 "name": null, 00:10:17.791 "uuid": "864c5a66-d1da-467b-97d9-142ab9add2e9", 00:10:17.791 "is_configured": false, 00:10:17.791 "data_offset": 0, 00:10:17.791 "data_size": 65536 00:10:17.791 }, 00:10:17.791 { 00:10:17.791 "name": null, 00:10:17.791 "uuid": "1f476538-1cce-423d-b3c7-88fc64a2fe31", 00:10:17.791 "is_configured": false, 00:10:17.791 "data_offset": 0, 00:10:17.791 "data_size": 65536 00:10:17.791 }, 00:10:17.791 { 00:10:17.791 "name": "BaseBdev3", 00:10:17.791 "uuid": "ef7ecfe9-923a-45b4-b6d0-931a8dbbaec5", 00:10:17.791 "is_configured": true, 00:10:17.791 "data_offset": 0, 00:10:17.791 "data_size": 65536 00:10:17.791 }, 00:10:17.791 { 00:10:17.791 "name": "BaseBdev4", 00:10:17.791 "uuid": "ac327caa-19cf-4a58-bc6f-46c331c060a7", 00:10:17.791 "is_configured": true, 00:10:17.791 "data_offset": 0, 00:10:17.791 "data_size": 65536 00:10:17.791 } 00:10:17.791 ] 00:10:17.791 }' 00:10:17.791 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.791 15:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.050 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:18.050 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.050 15:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.050 15:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.050 15:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.050 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:18.050 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:18.050 15:18:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.050 15:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.050 [2024-11-20 15:18:04.523530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:18.050 15:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.050 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:18.050 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.050 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.050 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.050 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.050 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.050 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.050 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.050 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.050 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.367 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.367 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.367 15:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.367 15:18:04 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:18.367 15:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.367 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.367 "name": "Existed_Raid", 00:10:18.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.367 "strip_size_kb": 64, 00:10:18.367 "state": "configuring", 00:10:18.367 "raid_level": "raid0", 00:10:18.367 "superblock": false, 00:10:18.367 "num_base_bdevs": 4, 00:10:18.367 "num_base_bdevs_discovered": 3, 00:10:18.367 "num_base_bdevs_operational": 4, 00:10:18.367 "base_bdevs_list": [ 00:10:18.367 { 00:10:18.367 "name": null, 00:10:18.367 "uuid": "864c5a66-d1da-467b-97d9-142ab9add2e9", 00:10:18.367 "is_configured": false, 00:10:18.367 "data_offset": 0, 00:10:18.367 "data_size": 65536 00:10:18.367 }, 00:10:18.367 { 00:10:18.367 "name": "BaseBdev2", 00:10:18.367 "uuid": "1f476538-1cce-423d-b3c7-88fc64a2fe31", 00:10:18.367 "is_configured": true, 00:10:18.367 "data_offset": 0, 00:10:18.367 "data_size": 65536 00:10:18.367 }, 00:10:18.367 { 00:10:18.367 "name": "BaseBdev3", 00:10:18.367 "uuid": "ef7ecfe9-923a-45b4-b6d0-931a8dbbaec5", 00:10:18.367 "is_configured": true, 00:10:18.367 "data_offset": 0, 00:10:18.367 "data_size": 65536 00:10:18.367 }, 00:10:18.367 { 00:10:18.367 "name": "BaseBdev4", 00:10:18.367 "uuid": "ac327caa-19cf-4a58-bc6f-46c331c060a7", 00:10:18.367 "is_configured": true, 00:10:18.367 "data_offset": 0, 00:10:18.367 "data_size": 65536 00:10:18.367 } 00:10:18.367 ] 00:10:18.367 }' 00:10:18.367 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.367 15:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.644 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.644 15:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:18.644 15:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.644 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:18.644 15:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.644 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:18.644 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.644 15:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.644 15:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.644 15:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:18.644 15:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.644 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 864c5a66-d1da-467b-97d9-142ab9add2e9 00:10:18.644 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.644 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.644 [2024-11-20 15:18:05.046405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:18.644 [2024-11-20 15:18:05.046628] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:18.644 [2024-11-20 15:18:05.046649] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:18.644 [2024-11-20 15:18:05.047008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:18.644 [2024-11-20 15:18:05.047163] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:18.644 [2024-11-20 15:18:05.047178] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:18.644 [2024-11-20 15:18:05.047450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.644 NewBaseBdev 00:10:18.644 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.644 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:18.644 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:18.644 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:18.644 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:18.644 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:18.644 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:18.644 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:18.644 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.644 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.644 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.644 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:18.644 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.644 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.644 [ 00:10:18.644 { 
00:10:18.644 "name": "NewBaseBdev", 00:10:18.644 "aliases": [ 00:10:18.644 "864c5a66-d1da-467b-97d9-142ab9add2e9" 00:10:18.644 ], 00:10:18.644 "product_name": "Malloc disk", 00:10:18.644 "block_size": 512, 00:10:18.644 "num_blocks": 65536, 00:10:18.644 "uuid": "864c5a66-d1da-467b-97d9-142ab9add2e9", 00:10:18.644 "assigned_rate_limits": { 00:10:18.644 "rw_ios_per_sec": 0, 00:10:18.644 "rw_mbytes_per_sec": 0, 00:10:18.644 "r_mbytes_per_sec": 0, 00:10:18.644 "w_mbytes_per_sec": 0 00:10:18.644 }, 00:10:18.644 "claimed": true, 00:10:18.644 "claim_type": "exclusive_write", 00:10:18.644 "zoned": false, 00:10:18.644 "supported_io_types": { 00:10:18.644 "read": true, 00:10:18.644 "write": true, 00:10:18.644 "unmap": true, 00:10:18.644 "flush": true, 00:10:18.644 "reset": true, 00:10:18.644 "nvme_admin": false, 00:10:18.644 "nvme_io": false, 00:10:18.644 "nvme_io_md": false, 00:10:18.644 "write_zeroes": true, 00:10:18.644 "zcopy": true, 00:10:18.644 "get_zone_info": false, 00:10:18.644 "zone_management": false, 00:10:18.644 "zone_append": false, 00:10:18.644 "compare": false, 00:10:18.644 "compare_and_write": false, 00:10:18.644 "abort": true, 00:10:18.644 "seek_hole": false, 00:10:18.644 "seek_data": false, 00:10:18.644 "copy": true, 00:10:18.644 "nvme_iov_md": false 00:10:18.644 }, 00:10:18.644 "memory_domains": [ 00:10:18.644 { 00:10:18.644 "dma_device_id": "system", 00:10:18.644 "dma_device_type": 1 00:10:18.644 }, 00:10:18.644 { 00:10:18.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.644 "dma_device_type": 2 00:10:18.644 } 00:10:18.644 ], 00:10:18.644 "driver_specific": {} 00:10:18.644 } 00:10:18.644 ] 00:10:18.645 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.645 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:18.645 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:18.645 
15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.645 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:18.645 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.645 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.645 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.645 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.645 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.645 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.645 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.645 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.645 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.645 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.645 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.645 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.903 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.903 "name": "Existed_Raid", 00:10:18.903 "uuid": "ce37bbf8-339a-43e1-9c21-26f57a72498d", 00:10:18.903 "strip_size_kb": 64, 00:10:18.903 "state": "online", 00:10:18.903 "raid_level": "raid0", 00:10:18.903 "superblock": false, 00:10:18.903 "num_base_bdevs": 4, 00:10:18.903 "num_base_bdevs_discovered": 4, 00:10:18.903 
"num_base_bdevs_operational": 4, 00:10:18.903 "base_bdevs_list": [ 00:10:18.903 { 00:10:18.903 "name": "NewBaseBdev", 00:10:18.903 "uuid": "864c5a66-d1da-467b-97d9-142ab9add2e9", 00:10:18.903 "is_configured": true, 00:10:18.903 "data_offset": 0, 00:10:18.903 "data_size": 65536 00:10:18.903 }, 00:10:18.903 { 00:10:18.903 "name": "BaseBdev2", 00:10:18.903 "uuid": "1f476538-1cce-423d-b3c7-88fc64a2fe31", 00:10:18.903 "is_configured": true, 00:10:18.903 "data_offset": 0, 00:10:18.903 "data_size": 65536 00:10:18.903 }, 00:10:18.903 { 00:10:18.903 "name": "BaseBdev3", 00:10:18.903 "uuid": "ef7ecfe9-923a-45b4-b6d0-931a8dbbaec5", 00:10:18.903 "is_configured": true, 00:10:18.903 "data_offset": 0, 00:10:18.903 "data_size": 65536 00:10:18.903 }, 00:10:18.903 { 00:10:18.903 "name": "BaseBdev4", 00:10:18.903 "uuid": "ac327caa-19cf-4a58-bc6f-46c331c060a7", 00:10:18.903 "is_configured": true, 00:10:18.903 "data_offset": 0, 00:10:18.903 "data_size": 65536 00:10:18.903 } 00:10:18.903 ] 00:10:18.903 }' 00:10:18.903 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.903 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.162 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:19.162 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:19.162 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:19.162 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:19.162 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:19.162 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:19.162 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:10:19.162 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:19.162 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.162 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.162 [2024-11-20 15:18:05.522136] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:19.163 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.163 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:19.163 "name": "Existed_Raid", 00:10:19.163 "aliases": [ 00:10:19.163 "ce37bbf8-339a-43e1-9c21-26f57a72498d" 00:10:19.163 ], 00:10:19.163 "product_name": "Raid Volume", 00:10:19.163 "block_size": 512, 00:10:19.163 "num_blocks": 262144, 00:10:19.163 "uuid": "ce37bbf8-339a-43e1-9c21-26f57a72498d", 00:10:19.163 "assigned_rate_limits": { 00:10:19.163 "rw_ios_per_sec": 0, 00:10:19.163 "rw_mbytes_per_sec": 0, 00:10:19.163 "r_mbytes_per_sec": 0, 00:10:19.163 "w_mbytes_per_sec": 0 00:10:19.163 }, 00:10:19.163 "claimed": false, 00:10:19.163 "zoned": false, 00:10:19.163 "supported_io_types": { 00:10:19.163 "read": true, 00:10:19.163 "write": true, 00:10:19.163 "unmap": true, 00:10:19.163 "flush": true, 00:10:19.163 "reset": true, 00:10:19.163 "nvme_admin": false, 00:10:19.163 "nvme_io": false, 00:10:19.163 "nvme_io_md": false, 00:10:19.163 "write_zeroes": true, 00:10:19.163 "zcopy": false, 00:10:19.163 "get_zone_info": false, 00:10:19.163 "zone_management": false, 00:10:19.163 "zone_append": false, 00:10:19.163 "compare": false, 00:10:19.163 "compare_and_write": false, 00:10:19.163 "abort": false, 00:10:19.163 "seek_hole": false, 00:10:19.163 "seek_data": false, 00:10:19.163 "copy": false, 00:10:19.163 "nvme_iov_md": false 00:10:19.163 }, 00:10:19.163 "memory_domains": [ 00:10:19.163 { 00:10:19.163 "dma_device_id": "system", 
00:10:19.163 "dma_device_type": 1 00:10:19.163 }, 00:10:19.163 { 00:10:19.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.163 "dma_device_type": 2 00:10:19.163 }, 00:10:19.163 { 00:10:19.163 "dma_device_id": "system", 00:10:19.163 "dma_device_type": 1 00:10:19.163 }, 00:10:19.163 { 00:10:19.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.163 "dma_device_type": 2 00:10:19.163 }, 00:10:19.163 { 00:10:19.163 "dma_device_id": "system", 00:10:19.163 "dma_device_type": 1 00:10:19.163 }, 00:10:19.163 { 00:10:19.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.163 "dma_device_type": 2 00:10:19.163 }, 00:10:19.163 { 00:10:19.163 "dma_device_id": "system", 00:10:19.163 "dma_device_type": 1 00:10:19.163 }, 00:10:19.163 { 00:10:19.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.163 "dma_device_type": 2 00:10:19.163 } 00:10:19.163 ], 00:10:19.163 "driver_specific": { 00:10:19.163 "raid": { 00:10:19.163 "uuid": "ce37bbf8-339a-43e1-9c21-26f57a72498d", 00:10:19.163 "strip_size_kb": 64, 00:10:19.163 "state": "online", 00:10:19.163 "raid_level": "raid0", 00:10:19.163 "superblock": false, 00:10:19.163 "num_base_bdevs": 4, 00:10:19.163 "num_base_bdevs_discovered": 4, 00:10:19.163 "num_base_bdevs_operational": 4, 00:10:19.163 "base_bdevs_list": [ 00:10:19.163 { 00:10:19.163 "name": "NewBaseBdev", 00:10:19.163 "uuid": "864c5a66-d1da-467b-97d9-142ab9add2e9", 00:10:19.163 "is_configured": true, 00:10:19.163 "data_offset": 0, 00:10:19.163 "data_size": 65536 00:10:19.163 }, 00:10:19.163 { 00:10:19.163 "name": "BaseBdev2", 00:10:19.163 "uuid": "1f476538-1cce-423d-b3c7-88fc64a2fe31", 00:10:19.163 "is_configured": true, 00:10:19.163 "data_offset": 0, 00:10:19.163 "data_size": 65536 00:10:19.163 }, 00:10:19.163 { 00:10:19.163 "name": "BaseBdev3", 00:10:19.163 "uuid": "ef7ecfe9-923a-45b4-b6d0-931a8dbbaec5", 00:10:19.163 "is_configured": true, 00:10:19.163 "data_offset": 0, 00:10:19.163 "data_size": 65536 00:10:19.163 }, 00:10:19.163 { 00:10:19.163 "name": "BaseBdev4", 
00:10:19.163 "uuid": "ac327caa-19cf-4a58-bc6f-46c331c060a7", 00:10:19.163 "is_configured": true, 00:10:19.163 "data_offset": 0, 00:10:19.163 "data_size": 65536 00:10:19.163 } 00:10:19.163 ] 00:10:19.163 } 00:10:19.163 } 00:10:19.163 }' 00:10:19.163 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:19.163 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:19.163 BaseBdev2 00:10:19.163 BaseBdev3 00:10:19.163 BaseBdev4' 00:10:19.163 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.422 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:19.422 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.422 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:19.422 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.422 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.422 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.422 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.422 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.422 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.422 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.422 15:18:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:19.422 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:19.423 15:18:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.423 [2024-11-20 15:18:05.853377] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:19.423 [2024-11-20 15:18:05.853409] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:19.423 [2024-11-20 15:18:05.853489] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:19.423 [2024-11-20 15:18:05.853557] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:19.423 [2024-11-20 15:18:05.853568] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69228 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 69228 ']' 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69228 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69228 00:10:19.423 killing process with pid 69228 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69228' 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69228 00:10:19.423 [2024-11-20 15:18:05.902178] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:19.423 15:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69228 00:10:19.991 [2024-11-20 15:18:06.301164] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:20.996 15:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:20.996 ************************************ 00:10:20.996 END TEST raid_state_function_test 00:10:20.996 ************************************ 00:10:20.996 00:10:20.996 real 0m11.283s 00:10:20.996 user 0m17.925s 00:10:20.996 sys 0m2.202s 00:10:20.996 15:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.996 15:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.257 15:18:07 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
00:10:21.257 15:18:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:21.257 15:18:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:21.257 15:18:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:21.257 ************************************ 00:10:21.257 START TEST raid_state_function_test_sb 00:10:21.257 ************************************ 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:21.257 15:18:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:21.257 Process raid pid: 69895 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=69895 00:10:21.257 15:18:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69895' 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 69895 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 69895 ']' 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:21.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:21.257 15:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.257 [2024-11-20 15:18:07.647310] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:10:21.257 [2024-11-20 15:18:07.647464] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.517 [2024-11-20 15:18:07.832007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.517 [2024-11-20 15:18:07.952901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.776 [2024-11-20 15:18:08.163825] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.776 [2024-11-20 15:18:08.163870] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.035 15:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:22.035 15:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:22.035 15:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:22.035 15:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.035 15:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.035 [2024-11-20 15:18:08.484676] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:22.035 [2024-11-20 15:18:08.484731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:22.035 [2024-11-20 15:18:08.484743] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:22.035 [2024-11-20 15:18:08.484756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:22.035 [2024-11-20 15:18:08.484764] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:22.035 [2024-11-20 15:18:08.484776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:22.035 [2024-11-20 15:18:08.484784] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:22.035 [2024-11-20 15:18:08.484797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:22.035 15:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.035 15:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:22.035 15:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.035 15:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.035 15:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.035 15:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.035 15:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.035 15:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.035 15:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.035 15:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.035 15:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.035 15:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.035 15:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.035 15:18:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.035 15:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.294 15:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.294 15:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.294 "name": "Existed_Raid", 00:10:22.294 "uuid": "db661cb2-34ee-4972-97b3-a74387821044", 00:10:22.294 "strip_size_kb": 64, 00:10:22.294 "state": "configuring", 00:10:22.294 "raid_level": "raid0", 00:10:22.294 "superblock": true, 00:10:22.294 "num_base_bdevs": 4, 00:10:22.294 "num_base_bdevs_discovered": 0, 00:10:22.294 "num_base_bdevs_operational": 4, 00:10:22.294 "base_bdevs_list": [ 00:10:22.294 { 00:10:22.294 "name": "BaseBdev1", 00:10:22.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.294 "is_configured": false, 00:10:22.294 "data_offset": 0, 00:10:22.294 "data_size": 0 00:10:22.294 }, 00:10:22.294 { 00:10:22.294 "name": "BaseBdev2", 00:10:22.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.294 "is_configured": false, 00:10:22.294 "data_offset": 0, 00:10:22.294 "data_size": 0 00:10:22.294 }, 00:10:22.294 { 00:10:22.294 "name": "BaseBdev3", 00:10:22.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.294 "is_configured": false, 00:10:22.294 "data_offset": 0, 00:10:22.294 "data_size": 0 00:10:22.294 }, 00:10:22.294 { 00:10:22.294 "name": "BaseBdev4", 00:10:22.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.294 "is_configured": false, 00:10:22.294 "data_offset": 0, 00:10:22.294 "data_size": 0 00:10:22.294 } 00:10:22.294 ] 00:10:22.294 }' 00:10:22.294 15:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.294 15:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.554 15:18:08 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:22.554 15:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.554 15:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.554 [2024-11-20 15:18:08.904028] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:22.554 [2024-11-20 15:18:08.904202] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:22.554 15:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.554 15:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:22.554 15:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.554 15:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.554 [2024-11-20 15:18:08.916006] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:22.554 [2024-11-20 15:18:08.916164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:22.554 [2024-11-20 15:18:08.916248] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:22.554 [2024-11-20 15:18:08.916293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:22.554 [2024-11-20 15:18:08.916323] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:22.554 [2024-11-20 15:18:08.916357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:22.554 [2024-11-20 15:18:08.916433] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:10:22.554 [2024-11-20 15:18:08.916475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:22.554 15:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.554 15:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:22.554 15:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.554 15:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.554 [2024-11-20 15:18:08.966867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:22.554 BaseBdev1 00:10:22.554 15:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.554 15:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:22.554 15:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:22.554 15:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:22.554 15:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:22.554 15:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:22.554 15:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:22.554 15:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:22.554 15:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.554 15:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.554 15:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:22.554 15:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:22.554 15:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.554 15:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.554 [ 00:10:22.554 { 00:10:22.554 "name": "BaseBdev1", 00:10:22.554 "aliases": [ 00:10:22.554 "f7e7204a-beb2-451f-b5de-7e39e7d03499" 00:10:22.554 ], 00:10:22.554 "product_name": "Malloc disk", 00:10:22.554 "block_size": 512, 00:10:22.554 "num_blocks": 65536, 00:10:22.554 "uuid": "f7e7204a-beb2-451f-b5de-7e39e7d03499", 00:10:22.554 "assigned_rate_limits": { 00:10:22.554 "rw_ios_per_sec": 0, 00:10:22.554 "rw_mbytes_per_sec": 0, 00:10:22.554 "r_mbytes_per_sec": 0, 00:10:22.554 "w_mbytes_per_sec": 0 00:10:22.554 }, 00:10:22.554 "claimed": true, 00:10:22.554 "claim_type": "exclusive_write", 00:10:22.554 "zoned": false, 00:10:22.554 "supported_io_types": { 00:10:22.554 "read": true, 00:10:22.554 "write": true, 00:10:22.554 "unmap": true, 00:10:22.554 "flush": true, 00:10:22.554 "reset": true, 00:10:22.554 "nvme_admin": false, 00:10:22.554 "nvme_io": false, 00:10:22.554 "nvme_io_md": false, 00:10:22.554 "write_zeroes": true, 00:10:22.554 "zcopy": true, 00:10:22.554 "get_zone_info": false, 00:10:22.554 "zone_management": false, 00:10:22.554 "zone_append": false, 00:10:22.554 "compare": false, 00:10:22.554 "compare_and_write": false, 00:10:22.554 "abort": true, 00:10:22.554 "seek_hole": false, 00:10:22.554 "seek_data": false, 00:10:22.554 "copy": true, 00:10:22.554 "nvme_iov_md": false 00:10:22.554 }, 00:10:22.554 "memory_domains": [ 00:10:22.554 { 00:10:22.554 "dma_device_id": "system", 00:10:22.554 "dma_device_type": 1 00:10:22.554 }, 00:10:22.554 { 00:10:22.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.554 "dma_device_type": 2 00:10:22.554 } 00:10:22.554 ], 00:10:22.554 "driver_specific": {} 
00:10:22.554 } 00:10:22.554 ] 00:10:22.554 15:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.554 15:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:22.554 15:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:22.554 15:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.554 15:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.554 15:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.554 15:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.554 15:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.554 15:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.554 15:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.554 15:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.554 15:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.554 15:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.554 15:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.554 15:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.554 15:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.554 15:18:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.813 15:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.813 "name": "Existed_Raid", 00:10:22.813 "uuid": "dac530c7-3af2-42af-a754-0829b288dde4", 00:10:22.813 "strip_size_kb": 64, 00:10:22.813 "state": "configuring", 00:10:22.813 "raid_level": "raid0", 00:10:22.813 "superblock": true, 00:10:22.813 "num_base_bdevs": 4, 00:10:22.813 "num_base_bdevs_discovered": 1, 00:10:22.813 "num_base_bdevs_operational": 4, 00:10:22.813 "base_bdevs_list": [ 00:10:22.813 { 00:10:22.813 "name": "BaseBdev1", 00:10:22.813 "uuid": "f7e7204a-beb2-451f-b5de-7e39e7d03499", 00:10:22.813 "is_configured": true, 00:10:22.813 "data_offset": 2048, 00:10:22.813 "data_size": 63488 00:10:22.813 }, 00:10:22.813 { 00:10:22.813 "name": "BaseBdev2", 00:10:22.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.813 "is_configured": false, 00:10:22.813 "data_offset": 0, 00:10:22.813 "data_size": 0 00:10:22.813 }, 00:10:22.813 { 00:10:22.813 "name": "BaseBdev3", 00:10:22.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.813 "is_configured": false, 00:10:22.813 "data_offset": 0, 00:10:22.813 "data_size": 0 00:10:22.813 }, 00:10:22.813 { 00:10:22.813 "name": "BaseBdev4", 00:10:22.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.813 "is_configured": false, 00:10:22.813 "data_offset": 0, 00:10:22.813 "data_size": 0 00:10:22.813 } 00:10:22.813 ] 00:10:22.813 }' 00:10:22.813 15:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.813 15:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.072 15:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:23.072 15:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.072 15:18:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:23.072 [2024-11-20 15:18:09.478246] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:23.072 [2024-11-20 15:18:09.478440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:23.072 15:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.072 15:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:23.072 15:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.072 15:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.072 [2024-11-20 15:18:09.490293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:23.072 [2024-11-20 15:18:09.492537] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:23.072 [2024-11-20 15:18:09.492588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:23.072 [2024-11-20 15:18:09.492601] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:23.072 [2024-11-20 15:18:09.492616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:23.072 [2024-11-20 15:18:09.492625] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:23.072 [2024-11-20 15:18:09.492638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:23.072 15:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.072 15:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:23.072 15:18:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:23.072 15:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:23.072 15:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.072 15:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.072 15:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:23.072 15:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.072 15:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.072 15:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.072 15:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.072 15:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.072 15:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.072 15:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.072 15:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.072 15:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.072 15:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.072 15:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.073 15:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.073 "name": 
"Existed_Raid", 00:10:23.073 "uuid": "49e752de-93be-4adc-9f77-8172b268b043", 00:10:23.073 "strip_size_kb": 64, 00:10:23.073 "state": "configuring", 00:10:23.073 "raid_level": "raid0", 00:10:23.073 "superblock": true, 00:10:23.073 "num_base_bdevs": 4, 00:10:23.073 "num_base_bdevs_discovered": 1, 00:10:23.073 "num_base_bdevs_operational": 4, 00:10:23.073 "base_bdevs_list": [ 00:10:23.073 { 00:10:23.073 "name": "BaseBdev1", 00:10:23.073 "uuid": "f7e7204a-beb2-451f-b5de-7e39e7d03499", 00:10:23.073 "is_configured": true, 00:10:23.073 "data_offset": 2048, 00:10:23.073 "data_size": 63488 00:10:23.073 }, 00:10:23.073 { 00:10:23.073 "name": "BaseBdev2", 00:10:23.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.073 "is_configured": false, 00:10:23.073 "data_offset": 0, 00:10:23.073 "data_size": 0 00:10:23.073 }, 00:10:23.073 { 00:10:23.073 "name": "BaseBdev3", 00:10:23.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.073 "is_configured": false, 00:10:23.073 "data_offset": 0, 00:10:23.073 "data_size": 0 00:10:23.073 }, 00:10:23.073 { 00:10:23.073 "name": "BaseBdev4", 00:10:23.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.073 "is_configured": false, 00:10:23.073 "data_offset": 0, 00:10:23.073 "data_size": 0 00:10:23.073 } 00:10:23.073 ] 00:10:23.073 }' 00:10:23.073 15:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.073 15:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.641 15:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:23.641 15:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.641 15:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.641 [2024-11-20 15:18:09.976254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:10:23.641 BaseBdev2 00:10:23.641 15:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.641 15:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:23.641 15:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:23.641 15:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:23.641 15:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:23.641 15:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:23.641 15:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:23.641 15:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:23.641 15:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.641 15:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.641 15:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.641 15:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:23.641 15:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.641 15:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.641 [ 00:10:23.641 { 00:10:23.641 "name": "BaseBdev2", 00:10:23.641 "aliases": [ 00:10:23.641 "eb4a12d2-7e46-4a17-8d56-97b4a902faa7" 00:10:23.641 ], 00:10:23.641 "product_name": "Malloc disk", 00:10:23.641 "block_size": 512, 00:10:23.641 "num_blocks": 65536, 00:10:23.641 "uuid": "eb4a12d2-7e46-4a17-8d56-97b4a902faa7", 00:10:23.641 
"assigned_rate_limits": { 00:10:23.641 "rw_ios_per_sec": 0, 00:10:23.641 "rw_mbytes_per_sec": 0, 00:10:23.641 "r_mbytes_per_sec": 0, 00:10:23.641 "w_mbytes_per_sec": 0 00:10:23.641 }, 00:10:23.641 "claimed": true, 00:10:23.641 "claim_type": "exclusive_write", 00:10:23.641 "zoned": false, 00:10:23.641 "supported_io_types": { 00:10:23.641 "read": true, 00:10:23.641 "write": true, 00:10:23.641 "unmap": true, 00:10:23.641 "flush": true, 00:10:23.641 "reset": true, 00:10:23.641 "nvme_admin": false, 00:10:23.641 "nvme_io": false, 00:10:23.641 "nvme_io_md": false, 00:10:23.641 "write_zeroes": true, 00:10:23.641 "zcopy": true, 00:10:23.641 "get_zone_info": false, 00:10:23.641 "zone_management": false, 00:10:23.641 "zone_append": false, 00:10:23.641 "compare": false, 00:10:23.641 "compare_and_write": false, 00:10:23.641 "abort": true, 00:10:23.641 "seek_hole": false, 00:10:23.641 "seek_data": false, 00:10:23.641 "copy": true, 00:10:23.641 "nvme_iov_md": false 00:10:23.641 }, 00:10:23.641 "memory_domains": [ 00:10:23.641 { 00:10:23.641 "dma_device_id": "system", 00:10:23.641 "dma_device_type": 1 00:10:23.641 }, 00:10:23.641 { 00:10:23.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.641 "dma_device_type": 2 00:10:23.641 } 00:10:23.641 ], 00:10:23.641 "driver_specific": {} 00:10:23.641 } 00:10:23.641 ] 00:10:23.642 15:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.642 15:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:23.642 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:23.642 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:23.642 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:23.642 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:23.642 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.642 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:23.642 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.642 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.642 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.642 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.642 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.642 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.642 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.642 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.642 15:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.642 15:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.642 15:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.642 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.642 "name": "Existed_Raid", 00:10:23.642 "uuid": "49e752de-93be-4adc-9f77-8172b268b043", 00:10:23.642 "strip_size_kb": 64, 00:10:23.642 "state": "configuring", 00:10:23.642 "raid_level": "raid0", 00:10:23.642 "superblock": true, 00:10:23.642 "num_base_bdevs": 4, 00:10:23.642 "num_base_bdevs_discovered": 2, 00:10:23.642 "num_base_bdevs_operational": 4, 
00:10:23.642 "base_bdevs_list": [ 00:10:23.642 { 00:10:23.642 "name": "BaseBdev1", 00:10:23.642 "uuid": "f7e7204a-beb2-451f-b5de-7e39e7d03499", 00:10:23.642 "is_configured": true, 00:10:23.642 "data_offset": 2048, 00:10:23.642 "data_size": 63488 00:10:23.642 }, 00:10:23.642 { 00:10:23.642 "name": "BaseBdev2", 00:10:23.642 "uuid": "eb4a12d2-7e46-4a17-8d56-97b4a902faa7", 00:10:23.642 "is_configured": true, 00:10:23.642 "data_offset": 2048, 00:10:23.642 "data_size": 63488 00:10:23.642 }, 00:10:23.642 { 00:10:23.642 "name": "BaseBdev3", 00:10:23.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.642 "is_configured": false, 00:10:23.642 "data_offset": 0, 00:10:23.642 "data_size": 0 00:10:23.642 }, 00:10:23.642 { 00:10:23.642 "name": "BaseBdev4", 00:10:23.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.642 "is_configured": false, 00:10:23.642 "data_offset": 0, 00:10:23.642 "data_size": 0 00:10:23.642 } 00:10:23.642 ] 00:10:23.642 }' 00:10:23.642 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.642 15:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.211 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:24.211 15:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.211 15:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.211 [2024-11-20 15:18:10.529776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:24.211 BaseBdev3 00:10:24.211 15:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.211 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:24.211 15:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:10:24.211 15:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:24.211 15:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:24.211 15:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:24.211 15:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:24.211 15:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:24.211 15:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.211 15:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.211 15:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.211 15:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:24.211 15:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.211 15:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.211 [ 00:10:24.211 { 00:10:24.211 "name": "BaseBdev3", 00:10:24.211 "aliases": [ 00:10:24.211 "7318996c-3d72-4857-9a29-75a0d8c5face" 00:10:24.211 ], 00:10:24.212 "product_name": "Malloc disk", 00:10:24.212 "block_size": 512, 00:10:24.212 "num_blocks": 65536, 00:10:24.212 "uuid": "7318996c-3d72-4857-9a29-75a0d8c5face", 00:10:24.212 "assigned_rate_limits": { 00:10:24.212 "rw_ios_per_sec": 0, 00:10:24.212 "rw_mbytes_per_sec": 0, 00:10:24.212 "r_mbytes_per_sec": 0, 00:10:24.212 "w_mbytes_per_sec": 0 00:10:24.212 }, 00:10:24.212 "claimed": true, 00:10:24.212 "claim_type": "exclusive_write", 00:10:24.212 "zoned": false, 00:10:24.212 "supported_io_types": { 00:10:24.212 "read": true, 00:10:24.212 
"write": true, 00:10:24.212 "unmap": true, 00:10:24.212 "flush": true, 00:10:24.212 "reset": true, 00:10:24.212 "nvme_admin": false, 00:10:24.212 "nvme_io": false, 00:10:24.212 "nvme_io_md": false, 00:10:24.212 "write_zeroes": true, 00:10:24.212 "zcopy": true, 00:10:24.212 "get_zone_info": false, 00:10:24.212 "zone_management": false, 00:10:24.212 "zone_append": false, 00:10:24.212 "compare": false, 00:10:24.212 "compare_and_write": false, 00:10:24.212 "abort": true, 00:10:24.212 "seek_hole": false, 00:10:24.212 "seek_data": false, 00:10:24.212 "copy": true, 00:10:24.212 "nvme_iov_md": false 00:10:24.212 }, 00:10:24.212 "memory_domains": [ 00:10:24.212 { 00:10:24.212 "dma_device_id": "system", 00:10:24.212 "dma_device_type": 1 00:10:24.212 }, 00:10:24.212 { 00:10:24.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.212 "dma_device_type": 2 00:10:24.212 } 00:10:24.212 ], 00:10:24.212 "driver_specific": {} 00:10:24.212 } 00:10:24.212 ] 00:10:24.212 15:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.212 15:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:24.212 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:24.212 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:24.212 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:24.212 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.212 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.212 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:24.212 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:24.212 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.212 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.212 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.212 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.212 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.212 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.212 15:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.212 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.212 15:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.212 15:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.212 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.212 "name": "Existed_Raid", 00:10:24.212 "uuid": "49e752de-93be-4adc-9f77-8172b268b043", 00:10:24.212 "strip_size_kb": 64, 00:10:24.212 "state": "configuring", 00:10:24.212 "raid_level": "raid0", 00:10:24.212 "superblock": true, 00:10:24.212 "num_base_bdevs": 4, 00:10:24.212 "num_base_bdevs_discovered": 3, 00:10:24.212 "num_base_bdevs_operational": 4, 00:10:24.212 "base_bdevs_list": [ 00:10:24.212 { 00:10:24.212 "name": "BaseBdev1", 00:10:24.212 "uuid": "f7e7204a-beb2-451f-b5de-7e39e7d03499", 00:10:24.212 "is_configured": true, 00:10:24.212 "data_offset": 2048, 00:10:24.212 "data_size": 63488 00:10:24.212 }, 00:10:24.212 { 00:10:24.212 "name": "BaseBdev2", 00:10:24.212 "uuid": 
"eb4a12d2-7e46-4a17-8d56-97b4a902faa7", 00:10:24.212 "is_configured": true, 00:10:24.212 "data_offset": 2048, 00:10:24.212 "data_size": 63488 00:10:24.212 }, 00:10:24.212 { 00:10:24.212 "name": "BaseBdev3", 00:10:24.212 "uuid": "7318996c-3d72-4857-9a29-75a0d8c5face", 00:10:24.212 "is_configured": true, 00:10:24.212 "data_offset": 2048, 00:10:24.212 "data_size": 63488 00:10:24.212 }, 00:10:24.212 { 00:10:24.212 "name": "BaseBdev4", 00:10:24.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.212 "is_configured": false, 00:10:24.212 "data_offset": 0, 00:10:24.212 "data_size": 0 00:10:24.212 } 00:10:24.212 ] 00:10:24.212 }' 00:10:24.212 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.212 15:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.781 15:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:24.781 15:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.781 15:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.781 [2024-11-20 15:18:11.003791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:24.781 [2024-11-20 15:18:11.004036] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:24.781 [2024-11-20 15:18:11.004053] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:24.781 [2024-11-20 15:18:11.004333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:24.781 BaseBdev4 00:10:24.781 [2024-11-20 15:18:11.004466] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:24.781 [2024-11-20 15:18:11.004479] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:24.781 [2024-11-20 15:18:11.004610] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.781 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.781 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:24.781 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:24.781 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:24.781 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:24.782 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:24.782 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:24.782 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:24.782 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.782 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.782 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.782 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:24.782 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.782 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.782 [ 00:10:24.782 { 00:10:24.782 "name": "BaseBdev4", 00:10:24.782 "aliases": [ 00:10:24.782 "010281e2-b775-4a70-9564-3168589b002f" 00:10:24.782 ], 00:10:24.782 "product_name": "Malloc disk", 00:10:24.782 "block_size": 512, 00:10:24.782 
"num_blocks": 65536, 00:10:24.782 "uuid": "010281e2-b775-4a70-9564-3168589b002f", 00:10:24.782 "assigned_rate_limits": { 00:10:24.782 "rw_ios_per_sec": 0, 00:10:24.782 "rw_mbytes_per_sec": 0, 00:10:24.782 "r_mbytes_per_sec": 0, 00:10:24.782 "w_mbytes_per_sec": 0 00:10:24.782 }, 00:10:24.782 "claimed": true, 00:10:24.782 "claim_type": "exclusive_write", 00:10:24.782 "zoned": false, 00:10:24.782 "supported_io_types": { 00:10:24.782 "read": true, 00:10:24.782 "write": true, 00:10:24.782 "unmap": true, 00:10:24.782 "flush": true, 00:10:24.782 "reset": true, 00:10:24.782 "nvme_admin": false, 00:10:24.782 "nvme_io": false, 00:10:24.782 "nvme_io_md": false, 00:10:24.782 "write_zeroes": true, 00:10:24.782 "zcopy": true, 00:10:24.782 "get_zone_info": false, 00:10:24.782 "zone_management": false, 00:10:24.782 "zone_append": false, 00:10:24.782 "compare": false, 00:10:24.782 "compare_and_write": false, 00:10:24.782 "abort": true, 00:10:24.782 "seek_hole": false, 00:10:24.782 "seek_data": false, 00:10:24.782 "copy": true, 00:10:24.782 "nvme_iov_md": false 00:10:24.782 }, 00:10:24.782 "memory_domains": [ 00:10:24.782 { 00:10:24.782 "dma_device_id": "system", 00:10:24.782 "dma_device_type": 1 00:10:24.782 }, 00:10:24.782 { 00:10:24.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.782 "dma_device_type": 2 00:10:24.782 } 00:10:24.782 ], 00:10:24.782 "driver_specific": {} 00:10:24.782 } 00:10:24.782 ] 00:10:24.782 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.782 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:24.782 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:24.782 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:24.782 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:24.782 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.782 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.782 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:24.782 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.782 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.782 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.782 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.782 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.782 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.782 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.782 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.782 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.782 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.782 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.782 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.782 "name": "Existed_Raid", 00:10:24.782 "uuid": "49e752de-93be-4adc-9f77-8172b268b043", 00:10:24.782 "strip_size_kb": 64, 00:10:24.782 "state": "online", 00:10:24.782 "raid_level": "raid0", 00:10:24.782 "superblock": true, 00:10:24.782 "num_base_bdevs": 4, 
00:10:24.782 "num_base_bdevs_discovered": 4, 00:10:24.782 "num_base_bdevs_operational": 4, 00:10:24.782 "base_bdevs_list": [ 00:10:24.782 { 00:10:24.782 "name": "BaseBdev1", 00:10:24.782 "uuid": "f7e7204a-beb2-451f-b5de-7e39e7d03499", 00:10:24.782 "is_configured": true, 00:10:24.782 "data_offset": 2048, 00:10:24.782 "data_size": 63488 00:10:24.782 }, 00:10:24.782 { 00:10:24.782 "name": "BaseBdev2", 00:10:24.782 "uuid": "eb4a12d2-7e46-4a17-8d56-97b4a902faa7", 00:10:24.782 "is_configured": true, 00:10:24.782 "data_offset": 2048, 00:10:24.782 "data_size": 63488 00:10:24.782 }, 00:10:24.782 { 00:10:24.782 "name": "BaseBdev3", 00:10:24.782 "uuid": "7318996c-3d72-4857-9a29-75a0d8c5face", 00:10:24.782 "is_configured": true, 00:10:24.782 "data_offset": 2048, 00:10:24.782 "data_size": 63488 00:10:24.782 }, 00:10:24.782 { 00:10:24.782 "name": "BaseBdev4", 00:10:24.782 "uuid": "010281e2-b775-4a70-9564-3168589b002f", 00:10:24.782 "is_configured": true, 00:10:24.782 "data_offset": 2048, 00:10:24.782 "data_size": 63488 00:10:24.782 } 00:10:24.782 ] 00:10:24.782 }' 00:10:24.782 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.782 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.042 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:25.042 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:25.042 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:25.042 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:25.042 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:25.042 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:25.042 
15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:25.042 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:25.042 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.042 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.042 [2024-11-20 15:18:11.431871] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:25.042 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.042 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:25.042 "name": "Existed_Raid", 00:10:25.042 "aliases": [ 00:10:25.042 "49e752de-93be-4adc-9f77-8172b268b043" 00:10:25.042 ], 00:10:25.042 "product_name": "Raid Volume", 00:10:25.042 "block_size": 512, 00:10:25.042 "num_blocks": 253952, 00:10:25.042 "uuid": "49e752de-93be-4adc-9f77-8172b268b043", 00:10:25.042 "assigned_rate_limits": { 00:10:25.042 "rw_ios_per_sec": 0, 00:10:25.042 "rw_mbytes_per_sec": 0, 00:10:25.042 "r_mbytes_per_sec": 0, 00:10:25.042 "w_mbytes_per_sec": 0 00:10:25.042 }, 00:10:25.042 "claimed": false, 00:10:25.042 "zoned": false, 00:10:25.042 "supported_io_types": { 00:10:25.042 "read": true, 00:10:25.042 "write": true, 00:10:25.042 "unmap": true, 00:10:25.042 "flush": true, 00:10:25.042 "reset": true, 00:10:25.042 "nvme_admin": false, 00:10:25.042 "nvme_io": false, 00:10:25.042 "nvme_io_md": false, 00:10:25.042 "write_zeroes": true, 00:10:25.042 "zcopy": false, 00:10:25.042 "get_zone_info": false, 00:10:25.042 "zone_management": false, 00:10:25.042 "zone_append": false, 00:10:25.042 "compare": false, 00:10:25.042 "compare_and_write": false, 00:10:25.042 "abort": false, 00:10:25.042 "seek_hole": false, 00:10:25.042 "seek_data": false, 00:10:25.042 "copy": false, 00:10:25.042 
"nvme_iov_md": false 00:10:25.042 }, 00:10:25.042 "memory_domains": [ 00:10:25.042 { 00:10:25.042 "dma_device_id": "system", 00:10:25.042 "dma_device_type": 1 00:10:25.042 }, 00:10:25.042 { 00:10:25.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.042 "dma_device_type": 2 00:10:25.042 }, 00:10:25.042 { 00:10:25.042 "dma_device_id": "system", 00:10:25.042 "dma_device_type": 1 00:10:25.042 }, 00:10:25.043 { 00:10:25.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.043 "dma_device_type": 2 00:10:25.043 }, 00:10:25.043 { 00:10:25.043 "dma_device_id": "system", 00:10:25.043 "dma_device_type": 1 00:10:25.043 }, 00:10:25.043 { 00:10:25.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.043 "dma_device_type": 2 00:10:25.043 }, 00:10:25.043 { 00:10:25.043 "dma_device_id": "system", 00:10:25.043 "dma_device_type": 1 00:10:25.043 }, 00:10:25.043 { 00:10:25.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.043 "dma_device_type": 2 00:10:25.043 } 00:10:25.043 ], 00:10:25.043 "driver_specific": { 00:10:25.043 "raid": { 00:10:25.043 "uuid": "49e752de-93be-4adc-9f77-8172b268b043", 00:10:25.043 "strip_size_kb": 64, 00:10:25.043 "state": "online", 00:10:25.043 "raid_level": "raid0", 00:10:25.043 "superblock": true, 00:10:25.043 "num_base_bdevs": 4, 00:10:25.043 "num_base_bdevs_discovered": 4, 00:10:25.043 "num_base_bdevs_operational": 4, 00:10:25.043 "base_bdevs_list": [ 00:10:25.043 { 00:10:25.043 "name": "BaseBdev1", 00:10:25.043 "uuid": "f7e7204a-beb2-451f-b5de-7e39e7d03499", 00:10:25.043 "is_configured": true, 00:10:25.043 "data_offset": 2048, 00:10:25.043 "data_size": 63488 00:10:25.043 }, 00:10:25.043 { 00:10:25.043 "name": "BaseBdev2", 00:10:25.043 "uuid": "eb4a12d2-7e46-4a17-8d56-97b4a902faa7", 00:10:25.043 "is_configured": true, 00:10:25.043 "data_offset": 2048, 00:10:25.043 "data_size": 63488 00:10:25.043 }, 00:10:25.043 { 00:10:25.043 "name": "BaseBdev3", 00:10:25.043 "uuid": "7318996c-3d72-4857-9a29-75a0d8c5face", 00:10:25.043 "is_configured": true, 
00:10:25.043 "data_offset": 2048, 00:10:25.043 "data_size": 63488 00:10:25.043 }, 00:10:25.043 { 00:10:25.043 "name": "BaseBdev4", 00:10:25.043 "uuid": "010281e2-b775-4a70-9564-3168589b002f", 00:10:25.043 "is_configured": true, 00:10:25.043 "data_offset": 2048, 00:10:25.043 "data_size": 63488 00:10:25.043 } 00:10:25.043 ] 00:10:25.043 } 00:10:25.043 } 00:10:25.043 }' 00:10:25.043 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:25.043 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:25.043 BaseBdev2 00:10:25.043 BaseBdev3 00:10:25.043 BaseBdev4' 00:10:25.043 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.302 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:25.302 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.302 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:25.302 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.302 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.303 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.303 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.303 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.303 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.303 15:18:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.303 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.303 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:25.303 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.303 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.303 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.303 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.303 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.303 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.303 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.303 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:25.303 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.303 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.303 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.303 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.303 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.303 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:25.303 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.303 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:25.303 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.303 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.303 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.303 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.303 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.303 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:25.303 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.303 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.303 [2024-11-20 15:18:11.691205] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:25.303 [2024-11-20 15:18:11.691356] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:25.303 [2024-11-20 15:18:11.691434] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.562 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.562 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:25.562 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:25.562 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:25.562 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:25.562 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:25.562 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:25.562 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.562 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:25.562 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:25.562 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.562 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.562 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.562 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.562 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.562 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.562 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.562 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.562 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.562 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.562 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:25.562 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.562 "name": "Existed_Raid", 00:10:25.562 "uuid": "49e752de-93be-4adc-9f77-8172b268b043", 00:10:25.562 "strip_size_kb": 64, 00:10:25.562 "state": "offline", 00:10:25.562 "raid_level": "raid0", 00:10:25.562 "superblock": true, 00:10:25.562 "num_base_bdevs": 4, 00:10:25.562 "num_base_bdevs_discovered": 3, 00:10:25.562 "num_base_bdevs_operational": 3, 00:10:25.562 "base_bdevs_list": [ 00:10:25.562 { 00:10:25.562 "name": null, 00:10:25.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.562 "is_configured": false, 00:10:25.562 "data_offset": 0, 00:10:25.562 "data_size": 63488 00:10:25.562 }, 00:10:25.562 { 00:10:25.562 "name": "BaseBdev2", 00:10:25.562 "uuid": "eb4a12d2-7e46-4a17-8d56-97b4a902faa7", 00:10:25.562 "is_configured": true, 00:10:25.562 "data_offset": 2048, 00:10:25.562 "data_size": 63488 00:10:25.562 }, 00:10:25.562 { 00:10:25.562 "name": "BaseBdev3", 00:10:25.562 "uuid": "7318996c-3d72-4857-9a29-75a0d8c5face", 00:10:25.562 "is_configured": true, 00:10:25.562 "data_offset": 2048, 00:10:25.562 "data_size": 63488 00:10:25.562 }, 00:10:25.562 { 00:10:25.562 "name": "BaseBdev4", 00:10:25.562 "uuid": "010281e2-b775-4a70-9564-3168589b002f", 00:10:25.562 "is_configured": true, 00:10:25.562 "data_offset": 2048, 00:10:25.562 "data_size": 63488 00:10:25.562 } 00:10:25.562 ] 00:10:25.562 }' 00:10:25.562 15:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.562 15:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.821 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:25.821 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:25.821 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:25.821 15:18:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.821 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.821 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.821 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.821 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:25.821 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:25.821 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:25.821 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.821 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.821 [2024-11-20 15:18:12.232623] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:26.080 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.080 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:26.080 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:26.080 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.080 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.080 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:26.080 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.080 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:26.080 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:26.080 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:26.080 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:26.080 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.080 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.080 [2024-11-20 15:18:12.385008] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:26.080 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.080 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:26.080 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:26.080 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.080 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.080 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:26.080 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.080 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.080 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:26.080 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:26.080 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:26.080 15:18:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.080 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.080 [2024-11-20 15:18:12.537760] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:26.080 [2024-11-20 15:18:12.537811] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:26.339 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.339 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:26.339 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:26.339 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.339 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.339 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:26.339 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.339 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.339 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:26.339 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:26.339 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:26.339 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:26.339 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:26.339 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:26.339 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.340 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.340 BaseBdev2 00:10:26.340 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.340 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:26.340 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:26.340 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:26.340 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:26.340 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.340 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:26.340 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.340 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.340 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.340 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.340 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:26.340 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.340 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.340 [ 00:10:26.340 { 00:10:26.340 "name": "BaseBdev2", 00:10:26.340 "aliases": [ 00:10:26.340 
"c5280c76-35e0-4583-948c-9998d2f0f022" 00:10:26.340 ], 00:10:26.340 "product_name": "Malloc disk", 00:10:26.340 "block_size": 512, 00:10:26.340 "num_blocks": 65536, 00:10:26.340 "uuid": "c5280c76-35e0-4583-948c-9998d2f0f022", 00:10:26.340 "assigned_rate_limits": { 00:10:26.340 "rw_ios_per_sec": 0, 00:10:26.340 "rw_mbytes_per_sec": 0, 00:10:26.340 "r_mbytes_per_sec": 0, 00:10:26.340 "w_mbytes_per_sec": 0 00:10:26.340 }, 00:10:26.340 "claimed": false, 00:10:26.340 "zoned": false, 00:10:26.340 "supported_io_types": { 00:10:26.340 "read": true, 00:10:26.340 "write": true, 00:10:26.340 "unmap": true, 00:10:26.340 "flush": true, 00:10:26.340 "reset": true, 00:10:26.340 "nvme_admin": false, 00:10:26.340 "nvme_io": false, 00:10:26.340 "nvme_io_md": false, 00:10:26.340 "write_zeroes": true, 00:10:26.340 "zcopy": true, 00:10:26.340 "get_zone_info": false, 00:10:26.340 "zone_management": false, 00:10:26.340 "zone_append": false, 00:10:26.340 "compare": false, 00:10:26.340 "compare_and_write": false, 00:10:26.340 "abort": true, 00:10:26.340 "seek_hole": false, 00:10:26.340 "seek_data": false, 00:10:26.340 "copy": true, 00:10:26.340 "nvme_iov_md": false 00:10:26.340 }, 00:10:26.340 "memory_domains": [ 00:10:26.340 { 00:10:26.340 "dma_device_id": "system", 00:10:26.340 "dma_device_type": 1 00:10:26.340 }, 00:10:26.340 { 00:10:26.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.340 "dma_device_type": 2 00:10:26.340 } 00:10:26.340 ], 00:10:26.340 "driver_specific": {} 00:10:26.340 } 00:10:26.340 ] 00:10:26.340 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.340 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:26.340 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:26.340 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:26.340 15:18:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:26.340 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.340 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.600 BaseBdev3 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.600 [ 00:10:26.600 { 
00:10:26.600 "name": "BaseBdev3", 00:10:26.600 "aliases": [ 00:10:26.600 "361a9c91-2af0-4101-9dce-75017b328bb3" 00:10:26.600 ], 00:10:26.600 "product_name": "Malloc disk", 00:10:26.600 "block_size": 512, 00:10:26.600 "num_blocks": 65536, 00:10:26.600 "uuid": "361a9c91-2af0-4101-9dce-75017b328bb3", 00:10:26.600 "assigned_rate_limits": { 00:10:26.600 "rw_ios_per_sec": 0, 00:10:26.600 "rw_mbytes_per_sec": 0, 00:10:26.600 "r_mbytes_per_sec": 0, 00:10:26.600 "w_mbytes_per_sec": 0 00:10:26.600 }, 00:10:26.600 "claimed": false, 00:10:26.600 "zoned": false, 00:10:26.600 "supported_io_types": { 00:10:26.600 "read": true, 00:10:26.600 "write": true, 00:10:26.600 "unmap": true, 00:10:26.600 "flush": true, 00:10:26.600 "reset": true, 00:10:26.600 "nvme_admin": false, 00:10:26.600 "nvme_io": false, 00:10:26.600 "nvme_io_md": false, 00:10:26.600 "write_zeroes": true, 00:10:26.600 "zcopy": true, 00:10:26.600 "get_zone_info": false, 00:10:26.600 "zone_management": false, 00:10:26.600 "zone_append": false, 00:10:26.600 "compare": false, 00:10:26.600 "compare_and_write": false, 00:10:26.600 "abort": true, 00:10:26.600 "seek_hole": false, 00:10:26.600 "seek_data": false, 00:10:26.600 "copy": true, 00:10:26.600 "nvme_iov_md": false 00:10:26.600 }, 00:10:26.600 "memory_domains": [ 00:10:26.600 { 00:10:26.600 "dma_device_id": "system", 00:10:26.600 "dma_device_type": 1 00:10:26.600 }, 00:10:26.600 { 00:10:26.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.600 "dma_device_type": 2 00:10:26.600 } 00:10:26.600 ], 00:10:26.600 "driver_specific": {} 00:10:26.600 } 00:10:26.600 ] 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.600 BaseBdev4 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.600 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:26.600 [ 00:10:26.600 { 00:10:26.600 "name": "BaseBdev4", 00:10:26.600 "aliases": [ 00:10:26.600 "aaaedf8f-8e29-4612-8b1c-0afb4d93b875" 00:10:26.600 ], 00:10:26.600 "product_name": "Malloc disk", 00:10:26.600 "block_size": 512, 00:10:26.600 "num_blocks": 65536, 00:10:26.600 "uuid": "aaaedf8f-8e29-4612-8b1c-0afb4d93b875", 00:10:26.600 "assigned_rate_limits": { 00:10:26.600 "rw_ios_per_sec": 0, 00:10:26.600 "rw_mbytes_per_sec": 0, 00:10:26.600 "r_mbytes_per_sec": 0, 00:10:26.600 "w_mbytes_per_sec": 0 00:10:26.600 }, 00:10:26.600 "claimed": false, 00:10:26.600 "zoned": false, 00:10:26.600 "supported_io_types": { 00:10:26.600 "read": true, 00:10:26.600 "write": true, 00:10:26.600 "unmap": true, 00:10:26.600 "flush": true, 00:10:26.600 "reset": true, 00:10:26.601 "nvme_admin": false, 00:10:26.601 "nvme_io": false, 00:10:26.601 "nvme_io_md": false, 00:10:26.601 "write_zeroes": true, 00:10:26.601 "zcopy": true, 00:10:26.601 "get_zone_info": false, 00:10:26.601 "zone_management": false, 00:10:26.601 "zone_append": false, 00:10:26.601 "compare": false, 00:10:26.601 "compare_and_write": false, 00:10:26.601 "abort": true, 00:10:26.601 "seek_hole": false, 00:10:26.601 "seek_data": false, 00:10:26.601 "copy": true, 00:10:26.601 "nvme_iov_md": false 00:10:26.601 }, 00:10:26.601 "memory_domains": [ 00:10:26.601 { 00:10:26.601 "dma_device_id": "system", 00:10:26.601 "dma_device_type": 1 00:10:26.601 }, 00:10:26.601 { 00:10:26.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.601 "dma_device_type": 2 00:10:26.601 } 00:10:26.601 ], 00:10:26.601 "driver_specific": {} 00:10:26.601 } 00:10:26.601 ] 00:10:26.601 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.601 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:26.601 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:26.601 15:18:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:26.601 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:26.601 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.601 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.601 [2024-11-20 15:18:12.954378] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:26.601 [2024-11-20 15:18:12.954425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:26.601 [2024-11-20 15:18:12.954448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:26.601 [2024-11-20 15:18:12.956548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:26.601 [2024-11-20 15:18:12.956599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:26.601 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.601 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:26.601 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.601 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.601 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.601 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.601 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:26.601 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.601 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.601 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.601 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.601 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.601 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.601 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.601 15:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.601 15:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.601 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.601 "name": "Existed_Raid", 00:10:26.601 "uuid": "2edf9928-4105-4d8c-85b8-3441108511dd", 00:10:26.601 "strip_size_kb": 64, 00:10:26.601 "state": "configuring", 00:10:26.601 "raid_level": "raid0", 00:10:26.601 "superblock": true, 00:10:26.601 "num_base_bdevs": 4, 00:10:26.601 "num_base_bdevs_discovered": 3, 00:10:26.601 "num_base_bdevs_operational": 4, 00:10:26.601 "base_bdevs_list": [ 00:10:26.601 { 00:10:26.601 "name": "BaseBdev1", 00:10:26.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.601 "is_configured": false, 00:10:26.601 "data_offset": 0, 00:10:26.601 "data_size": 0 00:10:26.601 }, 00:10:26.601 { 00:10:26.601 "name": "BaseBdev2", 00:10:26.601 "uuid": "c5280c76-35e0-4583-948c-9998d2f0f022", 00:10:26.601 "is_configured": true, 00:10:26.601 "data_offset": 2048, 00:10:26.601 "data_size": 63488 
00:10:26.601 }, 00:10:26.601 { 00:10:26.601 "name": "BaseBdev3", 00:10:26.601 "uuid": "361a9c91-2af0-4101-9dce-75017b328bb3", 00:10:26.601 "is_configured": true, 00:10:26.601 "data_offset": 2048, 00:10:26.601 "data_size": 63488 00:10:26.601 }, 00:10:26.601 { 00:10:26.601 "name": "BaseBdev4", 00:10:26.601 "uuid": "aaaedf8f-8e29-4612-8b1c-0afb4d93b875", 00:10:26.601 "is_configured": true, 00:10:26.601 "data_offset": 2048, 00:10:26.601 "data_size": 63488 00:10:26.601 } 00:10:26.601 ] 00:10:26.601 }' 00:10:26.601 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.601 15:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.197 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:27.197 15:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.197 15:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.197 [2024-11-20 15:18:13.381813] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:27.197 15:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.197 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:27.197 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.197 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.197 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:27.197 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.197 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:27.197 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.197 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.197 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.197 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.197 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.197 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.197 15:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.197 15:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.197 15:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.197 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.197 "name": "Existed_Raid", 00:10:27.197 "uuid": "2edf9928-4105-4d8c-85b8-3441108511dd", 00:10:27.197 "strip_size_kb": 64, 00:10:27.197 "state": "configuring", 00:10:27.197 "raid_level": "raid0", 00:10:27.197 "superblock": true, 00:10:27.197 "num_base_bdevs": 4, 00:10:27.197 "num_base_bdevs_discovered": 2, 00:10:27.197 "num_base_bdevs_operational": 4, 00:10:27.197 "base_bdevs_list": [ 00:10:27.197 { 00:10:27.197 "name": "BaseBdev1", 00:10:27.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.197 "is_configured": false, 00:10:27.197 "data_offset": 0, 00:10:27.197 "data_size": 0 00:10:27.197 }, 00:10:27.197 { 00:10:27.197 "name": null, 00:10:27.197 "uuid": "c5280c76-35e0-4583-948c-9998d2f0f022", 00:10:27.197 "is_configured": false, 00:10:27.197 "data_offset": 0, 00:10:27.197 "data_size": 63488 
00:10:27.197 }, 00:10:27.197 { 00:10:27.197 "name": "BaseBdev3", 00:10:27.197 "uuid": "361a9c91-2af0-4101-9dce-75017b328bb3", 00:10:27.197 "is_configured": true, 00:10:27.197 "data_offset": 2048, 00:10:27.197 "data_size": 63488 00:10:27.197 }, 00:10:27.197 { 00:10:27.197 "name": "BaseBdev4", 00:10:27.197 "uuid": "aaaedf8f-8e29-4612-8b1c-0afb4d93b875", 00:10:27.197 "is_configured": true, 00:10:27.197 "data_offset": 2048, 00:10:27.197 "data_size": 63488 00:10:27.197 } 00:10:27.197 ] 00:10:27.197 }' 00:10:27.197 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.197 15:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.455 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.455 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:27.455 15:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.455 15:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.455 15:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.455 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:27.455 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:27.455 15:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.455 15:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.455 [2024-11-20 15:18:13.906248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:27.455 BaseBdev1 00:10:27.455 15:18:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.455 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:27.455 15:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:27.455 15:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:27.455 15:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:27.455 15:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:27.455 15:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:27.455 15:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:27.455 15:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.455 15:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.455 15:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.455 15:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:27.455 15:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.455 15:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.455 [ 00:10:27.455 { 00:10:27.455 "name": "BaseBdev1", 00:10:27.455 "aliases": [ 00:10:27.455 "d8b6a600-03bc-4ad8-964f-b6fa89d7e08f" 00:10:27.455 ], 00:10:27.455 "product_name": "Malloc disk", 00:10:27.455 "block_size": 512, 00:10:27.455 "num_blocks": 65536, 00:10:27.455 "uuid": "d8b6a600-03bc-4ad8-964f-b6fa89d7e08f", 00:10:27.455 "assigned_rate_limits": { 00:10:27.714 "rw_ios_per_sec": 0, 00:10:27.714 "rw_mbytes_per_sec": 0, 
00:10:27.714 "r_mbytes_per_sec": 0, 00:10:27.714 "w_mbytes_per_sec": 0 00:10:27.714 }, 00:10:27.714 "claimed": true, 00:10:27.714 "claim_type": "exclusive_write", 00:10:27.714 "zoned": false, 00:10:27.714 "supported_io_types": { 00:10:27.714 "read": true, 00:10:27.714 "write": true, 00:10:27.714 "unmap": true, 00:10:27.714 "flush": true, 00:10:27.714 "reset": true, 00:10:27.714 "nvme_admin": false, 00:10:27.714 "nvme_io": false, 00:10:27.714 "nvme_io_md": false, 00:10:27.714 "write_zeroes": true, 00:10:27.714 "zcopy": true, 00:10:27.714 "get_zone_info": false, 00:10:27.714 "zone_management": false, 00:10:27.714 "zone_append": false, 00:10:27.714 "compare": false, 00:10:27.714 "compare_and_write": false, 00:10:27.714 "abort": true, 00:10:27.714 "seek_hole": false, 00:10:27.714 "seek_data": false, 00:10:27.714 "copy": true, 00:10:27.714 "nvme_iov_md": false 00:10:27.714 }, 00:10:27.714 "memory_domains": [ 00:10:27.714 { 00:10:27.714 "dma_device_id": "system", 00:10:27.714 "dma_device_type": 1 00:10:27.714 }, 00:10:27.714 { 00:10:27.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.714 "dma_device_type": 2 00:10:27.714 } 00:10:27.714 ], 00:10:27.714 "driver_specific": {} 00:10:27.714 } 00:10:27.714 ] 00:10:27.714 15:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.714 15:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:27.714 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:27.714 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.714 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.714 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:27.714 15:18:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.714 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.714 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.714 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.714 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.714 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.714 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.714 15:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.714 15:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.714 15:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.714 15:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.714 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.714 "name": "Existed_Raid", 00:10:27.714 "uuid": "2edf9928-4105-4d8c-85b8-3441108511dd", 00:10:27.714 "strip_size_kb": 64, 00:10:27.714 "state": "configuring", 00:10:27.714 "raid_level": "raid0", 00:10:27.714 "superblock": true, 00:10:27.714 "num_base_bdevs": 4, 00:10:27.714 "num_base_bdevs_discovered": 3, 00:10:27.714 "num_base_bdevs_operational": 4, 00:10:27.714 "base_bdevs_list": [ 00:10:27.714 { 00:10:27.714 "name": "BaseBdev1", 00:10:27.714 "uuid": "d8b6a600-03bc-4ad8-964f-b6fa89d7e08f", 00:10:27.714 "is_configured": true, 00:10:27.714 "data_offset": 2048, 00:10:27.714 "data_size": 63488 00:10:27.714 }, 00:10:27.714 { 
00:10:27.714 "name": null, 00:10:27.714 "uuid": "c5280c76-35e0-4583-948c-9998d2f0f022", 00:10:27.714 "is_configured": false, 00:10:27.714 "data_offset": 0, 00:10:27.714 "data_size": 63488 00:10:27.714 }, 00:10:27.714 { 00:10:27.714 "name": "BaseBdev3", 00:10:27.714 "uuid": "361a9c91-2af0-4101-9dce-75017b328bb3", 00:10:27.714 "is_configured": true, 00:10:27.714 "data_offset": 2048, 00:10:27.714 "data_size": 63488 00:10:27.714 }, 00:10:27.714 { 00:10:27.714 "name": "BaseBdev4", 00:10:27.714 "uuid": "aaaedf8f-8e29-4612-8b1c-0afb4d93b875", 00:10:27.714 "is_configured": true, 00:10:27.714 "data_offset": 2048, 00:10:27.714 "data_size": 63488 00:10:27.714 } 00:10:27.714 ] 00:10:27.714 }' 00:10:27.714 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.714 15:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.974 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.974 15:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.974 15:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.974 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:27.974 15:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.974 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:27.974 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:27.974 15:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.974 15:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.974 [2024-11-20 15:18:14.409815] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:27.974 15:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.974 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:27.974 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.974 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.974 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:27.974 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.974 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.974 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.974 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.974 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.974 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.974 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.974 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.974 15:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.974 15:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.975 15:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.234 15:18:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.234 "name": "Existed_Raid", 00:10:28.234 "uuid": "2edf9928-4105-4d8c-85b8-3441108511dd", 00:10:28.234 "strip_size_kb": 64, 00:10:28.234 "state": "configuring", 00:10:28.234 "raid_level": "raid0", 00:10:28.234 "superblock": true, 00:10:28.234 "num_base_bdevs": 4, 00:10:28.234 "num_base_bdevs_discovered": 2, 00:10:28.234 "num_base_bdevs_operational": 4, 00:10:28.234 "base_bdevs_list": [ 00:10:28.234 { 00:10:28.234 "name": "BaseBdev1", 00:10:28.234 "uuid": "d8b6a600-03bc-4ad8-964f-b6fa89d7e08f", 00:10:28.234 "is_configured": true, 00:10:28.234 "data_offset": 2048, 00:10:28.234 "data_size": 63488 00:10:28.234 }, 00:10:28.234 { 00:10:28.234 "name": null, 00:10:28.234 "uuid": "c5280c76-35e0-4583-948c-9998d2f0f022", 00:10:28.234 "is_configured": false, 00:10:28.234 "data_offset": 0, 00:10:28.234 "data_size": 63488 00:10:28.234 }, 00:10:28.234 { 00:10:28.234 "name": null, 00:10:28.234 "uuid": "361a9c91-2af0-4101-9dce-75017b328bb3", 00:10:28.234 "is_configured": false, 00:10:28.234 "data_offset": 0, 00:10:28.234 "data_size": 63488 00:10:28.234 }, 00:10:28.234 { 00:10:28.234 "name": "BaseBdev4", 00:10:28.234 "uuid": "aaaedf8f-8e29-4612-8b1c-0afb4d93b875", 00:10:28.234 "is_configured": true, 00:10:28.234 "data_offset": 2048, 00:10:28.234 "data_size": 63488 00:10:28.234 } 00:10:28.234 ] 00:10:28.234 }' 00:10:28.234 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.234 15:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.493 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.493 15:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.493 15:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.493 15:18:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:28.493 15:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.493 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:28.493 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:28.493 15:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.493 15:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.493 [2024-11-20 15:18:14.901523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:28.493 15:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.493 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:28.493 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.493 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.493 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:28.493 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.493 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.493 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.493 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.493 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:28.493 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.493 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.493 15:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.493 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.493 15:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.493 15:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.493 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.493 "name": "Existed_Raid", 00:10:28.493 "uuid": "2edf9928-4105-4d8c-85b8-3441108511dd", 00:10:28.493 "strip_size_kb": 64, 00:10:28.493 "state": "configuring", 00:10:28.493 "raid_level": "raid0", 00:10:28.493 "superblock": true, 00:10:28.493 "num_base_bdevs": 4, 00:10:28.493 "num_base_bdevs_discovered": 3, 00:10:28.493 "num_base_bdevs_operational": 4, 00:10:28.493 "base_bdevs_list": [ 00:10:28.493 { 00:10:28.493 "name": "BaseBdev1", 00:10:28.493 "uuid": "d8b6a600-03bc-4ad8-964f-b6fa89d7e08f", 00:10:28.493 "is_configured": true, 00:10:28.493 "data_offset": 2048, 00:10:28.493 "data_size": 63488 00:10:28.493 }, 00:10:28.493 { 00:10:28.493 "name": null, 00:10:28.493 "uuid": "c5280c76-35e0-4583-948c-9998d2f0f022", 00:10:28.493 "is_configured": false, 00:10:28.493 "data_offset": 0, 00:10:28.493 "data_size": 63488 00:10:28.493 }, 00:10:28.493 { 00:10:28.493 "name": "BaseBdev3", 00:10:28.493 "uuid": "361a9c91-2af0-4101-9dce-75017b328bb3", 00:10:28.493 "is_configured": true, 00:10:28.493 "data_offset": 2048, 00:10:28.493 "data_size": 63488 00:10:28.493 }, 00:10:28.493 { 00:10:28.493 "name": "BaseBdev4", 00:10:28.493 "uuid": 
"aaaedf8f-8e29-4612-8b1c-0afb4d93b875", 00:10:28.493 "is_configured": true, 00:10:28.493 "data_offset": 2048, 00:10:28.493 "data_size": 63488 00:10:28.493 } 00:10:28.493 ] 00:10:28.493 }' 00:10:28.493 15:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.493 15:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.062 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.062 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:29.062 15:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.062 15:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.062 15:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.062 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:29.062 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:29.062 15:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.062 15:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.062 [2024-11-20 15:18:15.348901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:29.062 15:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.062 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:29.062 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.062 15:18:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.062 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:29.062 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.062 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.062 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.062 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.062 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.062 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.062 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.062 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.062 15:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.062 15:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.062 15:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.062 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.062 "name": "Existed_Raid", 00:10:29.062 "uuid": "2edf9928-4105-4d8c-85b8-3441108511dd", 00:10:29.062 "strip_size_kb": 64, 00:10:29.062 "state": "configuring", 00:10:29.062 "raid_level": "raid0", 00:10:29.062 "superblock": true, 00:10:29.062 "num_base_bdevs": 4, 00:10:29.062 "num_base_bdevs_discovered": 2, 00:10:29.062 "num_base_bdevs_operational": 4, 00:10:29.062 "base_bdevs_list": [ 00:10:29.062 { 00:10:29.062 "name": null, 00:10:29.062 
"uuid": "d8b6a600-03bc-4ad8-964f-b6fa89d7e08f", 00:10:29.062 "is_configured": false, 00:10:29.062 "data_offset": 0, 00:10:29.062 "data_size": 63488 00:10:29.062 }, 00:10:29.062 { 00:10:29.062 "name": null, 00:10:29.062 "uuid": "c5280c76-35e0-4583-948c-9998d2f0f022", 00:10:29.062 "is_configured": false, 00:10:29.062 "data_offset": 0, 00:10:29.062 "data_size": 63488 00:10:29.062 }, 00:10:29.062 { 00:10:29.062 "name": "BaseBdev3", 00:10:29.062 "uuid": "361a9c91-2af0-4101-9dce-75017b328bb3", 00:10:29.062 "is_configured": true, 00:10:29.062 "data_offset": 2048, 00:10:29.062 "data_size": 63488 00:10:29.062 }, 00:10:29.062 { 00:10:29.062 "name": "BaseBdev4", 00:10:29.062 "uuid": "aaaedf8f-8e29-4612-8b1c-0afb4d93b875", 00:10:29.062 "is_configured": true, 00:10:29.062 "data_offset": 2048, 00:10:29.062 "data_size": 63488 00:10:29.062 } 00:10:29.062 ] 00:10:29.062 }' 00:10:29.062 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.062 15:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.630 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:29.630 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.630 15:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.630 15:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.630 15:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.631 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:29.631 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:29.631 15:18:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.631 15:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.631 [2024-11-20 15:18:15.876844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:29.631 15:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.631 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:29.631 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.631 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.631 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:29.631 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.631 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.631 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.631 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.631 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.631 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.631 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.631 15:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.631 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.631 15:18:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.631 15:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.631 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.631 "name": "Existed_Raid", 00:10:29.631 "uuid": "2edf9928-4105-4d8c-85b8-3441108511dd", 00:10:29.631 "strip_size_kb": 64, 00:10:29.631 "state": "configuring", 00:10:29.631 "raid_level": "raid0", 00:10:29.631 "superblock": true, 00:10:29.631 "num_base_bdevs": 4, 00:10:29.631 "num_base_bdevs_discovered": 3, 00:10:29.631 "num_base_bdevs_operational": 4, 00:10:29.631 "base_bdevs_list": [ 00:10:29.631 { 00:10:29.631 "name": null, 00:10:29.631 "uuid": "d8b6a600-03bc-4ad8-964f-b6fa89d7e08f", 00:10:29.631 "is_configured": false, 00:10:29.631 "data_offset": 0, 00:10:29.631 "data_size": 63488 00:10:29.631 }, 00:10:29.631 { 00:10:29.631 "name": "BaseBdev2", 00:10:29.631 "uuid": "c5280c76-35e0-4583-948c-9998d2f0f022", 00:10:29.631 "is_configured": true, 00:10:29.631 "data_offset": 2048, 00:10:29.631 "data_size": 63488 00:10:29.631 }, 00:10:29.631 { 00:10:29.631 "name": "BaseBdev3", 00:10:29.631 "uuid": "361a9c91-2af0-4101-9dce-75017b328bb3", 00:10:29.631 "is_configured": true, 00:10:29.631 "data_offset": 2048, 00:10:29.631 "data_size": 63488 00:10:29.631 }, 00:10:29.631 { 00:10:29.631 "name": "BaseBdev4", 00:10:29.631 "uuid": "aaaedf8f-8e29-4612-8b1c-0afb4d93b875", 00:10:29.631 "is_configured": true, 00:10:29.631 "data_offset": 2048, 00:10:29.631 "data_size": 63488 00:10:29.631 } 00:10:29.631 ] 00:10:29.631 }' 00:10:29.631 15:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.631 15:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.890 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.890 15:18:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.890 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.890 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:29.890 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.150 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:30.150 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:30.150 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.150 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.150 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.150 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.150 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d8b6a600-03bc-4ad8-964f-b6fa89d7e08f 00:10:30.150 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.150 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.150 [2024-11-20 15:18:16.451550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:30.150 [2024-11-20 15:18:16.451838] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:30.150 [2024-11-20 15:18:16.451854] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:30.150 [2024-11-20 15:18:16.452137] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:30.150 NewBaseBdev 00:10:30.150 [2024-11-20 15:18:16.452289] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:30.150 [2024-11-20 15:18:16.452307] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:30.150 [2024-11-20 15:18:16.452445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.150 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.150 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:30.150 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:30.151 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:30.151 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:30.151 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:30.151 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:30.151 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:30.151 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.151 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.151 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.151 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:30.151 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.151 15:18:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.151 [ 00:10:30.151 { 00:10:30.151 "name": "NewBaseBdev", 00:10:30.151 "aliases": [ 00:10:30.151 "d8b6a600-03bc-4ad8-964f-b6fa89d7e08f" 00:10:30.151 ], 00:10:30.151 "product_name": "Malloc disk", 00:10:30.151 "block_size": 512, 00:10:30.151 "num_blocks": 65536, 00:10:30.151 "uuid": "d8b6a600-03bc-4ad8-964f-b6fa89d7e08f", 00:10:30.151 "assigned_rate_limits": { 00:10:30.151 "rw_ios_per_sec": 0, 00:10:30.151 "rw_mbytes_per_sec": 0, 00:10:30.151 "r_mbytes_per_sec": 0, 00:10:30.151 "w_mbytes_per_sec": 0 00:10:30.151 }, 00:10:30.151 "claimed": true, 00:10:30.151 "claim_type": "exclusive_write", 00:10:30.151 "zoned": false, 00:10:30.151 "supported_io_types": { 00:10:30.151 "read": true, 00:10:30.151 "write": true, 00:10:30.151 "unmap": true, 00:10:30.151 "flush": true, 00:10:30.151 "reset": true, 00:10:30.151 "nvme_admin": false, 00:10:30.151 "nvme_io": false, 00:10:30.151 "nvme_io_md": false, 00:10:30.151 "write_zeroes": true, 00:10:30.151 "zcopy": true, 00:10:30.151 "get_zone_info": false, 00:10:30.151 "zone_management": false, 00:10:30.151 "zone_append": false, 00:10:30.151 "compare": false, 00:10:30.151 "compare_and_write": false, 00:10:30.151 "abort": true, 00:10:30.151 "seek_hole": false, 00:10:30.151 "seek_data": false, 00:10:30.151 "copy": true, 00:10:30.151 "nvme_iov_md": false 00:10:30.151 }, 00:10:30.151 "memory_domains": [ 00:10:30.151 { 00:10:30.151 "dma_device_id": "system", 00:10:30.151 "dma_device_type": 1 00:10:30.151 }, 00:10:30.151 { 00:10:30.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.151 "dma_device_type": 2 00:10:30.151 } 00:10:30.151 ], 00:10:30.151 "driver_specific": {} 00:10:30.151 } 00:10:30.151 ] 00:10:30.151 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.151 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:30.151 15:18:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:30.151 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.151 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:30.151 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:30.151 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.151 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.151 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.151 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.151 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.151 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.151 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.151 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.151 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.151 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.151 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.151 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.151 "name": "Existed_Raid", 00:10:30.151 "uuid": "2edf9928-4105-4d8c-85b8-3441108511dd", 00:10:30.151 "strip_size_kb": 64, 00:10:30.151 
"state": "online", 00:10:30.151 "raid_level": "raid0", 00:10:30.151 "superblock": true, 00:10:30.151 "num_base_bdevs": 4, 00:10:30.151 "num_base_bdevs_discovered": 4, 00:10:30.151 "num_base_bdevs_operational": 4, 00:10:30.151 "base_bdevs_list": [ 00:10:30.151 { 00:10:30.151 "name": "NewBaseBdev", 00:10:30.151 "uuid": "d8b6a600-03bc-4ad8-964f-b6fa89d7e08f", 00:10:30.151 "is_configured": true, 00:10:30.151 "data_offset": 2048, 00:10:30.151 "data_size": 63488 00:10:30.151 }, 00:10:30.151 { 00:10:30.151 "name": "BaseBdev2", 00:10:30.151 "uuid": "c5280c76-35e0-4583-948c-9998d2f0f022", 00:10:30.151 "is_configured": true, 00:10:30.151 "data_offset": 2048, 00:10:30.151 "data_size": 63488 00:10:30.151 }, 00:10:30.151 { 00:10:30.151 "name": "BaseBdev3", 00:10:30.151 "uuid": "361a9c91-2af0-4101-9dce-75017b328bb3", 00:10:30.151 "is_configured": true, 00:10:30.151 "data_offset": 2048, 00:10:30.151 "data_size": 63488 00:10:30.151 }, 00:10:30.151 { 00:10:30.151 "name": "BaseBdev4", 00:10:30.151 "uuid": "aaaedf8f-8e29-4612-8b1c-0afb4d93b875", 00:10:30.151 "is_configured": true, 00:10:30.151 "data_offset": 2048, 00:10:30.151 "data_size": 63488 00:10:30.151 } 00:10:30.151 ] 00:10:30.151 }' 00:10:30.151 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.151 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.411 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:30.411 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:30.411 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:30.411 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:30.411 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:30.411 
15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:30.411 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:30.411 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.411 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.411 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:30.670 [2024-11-20 15:18:16.891344] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:30.670 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.670 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:30.670 "name": "Existed_Raid", 00:10:30.670 "aliases": [ 00:10:30.670 "2edf9928-4105-4d8c-85b8-3441108511dd" 00:10:30.670 ], 00:10:30.670 "product_name": "Raid Volume", 00:10:30.670 "block_size": 512, 00:10:30.670 "num_blocks": 253952, 00:10:30.670 "uuid": "2edf9928-4105-4d8c-85b8-3441108511dd", 00:10:30.670 "assigned_rate_limits": { 00:10:30.670 "rw_ios_per_sec": 0, 00:10:30.670 "rw_mbytes_per_sec": 0, 00:10:30.670 "r_mbytes_per_sec": 0, 00:10:30.670 "w_mbytes_per_sec": 0 00:10:30.670 }, 00:10:30.670 "claimed": false, 00:10:30.670 "zoned": false, 00:10:30.670 "supported_io_types": { 00:10:30.670 "read": true, 00:10:30.670 "write": true, 00:10:30.670 "unmap": true, 00:10:30.670 "flush": true, 00:10:30.670 "reset": true, 00:10:30.670 "nvme_admin": false, 00:10:30.670 "nvme_io": false, 00:10:30.670 "nvme_io_md": false, 00:10:30.670 "write_zeroes": true, 00:10:30.670 "zcopy": false, 00:10:30.670 "get_zone_info": false, 00:10:30.670 "zone_management": false, 00:10:30.670 "zone_append": false, 00:10:30.670 "compare": false, 00:10:30.670 "compare_and_write": false, 00:10:30.670 "abort": 
false, 00:10:30.670 "seek_hole": false, 00:10:30.670 "seek_data": false, 00:10:30.670 "copy": false, 00:10:30.670 "nvme_iov_md": false 00:10:30.670 }, 00:10:30.670 "memory_domains": [ 00:10:30.670 { 00:10:30.670 "dma_device_id": "system", 00:10:30.670 "dma_device_type": 1 00:10:30.670 }, 00:10:30.670 { 00:10:30.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.670 "dma_device_type": 2 00:10:30.670 }, 00:10:30.670 { 00:10:30.670 "dma_device_id": "system", 00:10:30.670 "dma_device_type": 1 00:10:30.670 }, 00:10:30.670 { 00:10:30.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.670 "dma_device_type": 2 00:10:30.670 }, 00:10:30.670 { 00:10:30.670 "dma_device_id": "system", 00:10:30.670 "dma_device_type": 1 00:10:30.670 }, 00:10:30.670 { 00:10:30.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.670 "dma_device_type": 2 00:10:30.670 }, 00:10:30.670 { 00:10:30.670 "dma_device_id": "system", 00:10:30.670 "dma_device_type": 1 00:10:30.670 }, 00:10:30.670 { 00:10:30.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.670 "dma_device_type": 2 00:10:30.670 } 00:10:30.670 ], 00:10:30.670 "driver_specific": { 00:10:30.670 "raid": { 00:10:30.670 "uuid": "2edf9928-4105-4d8c-85b8-3441108511dd", 00:10:30.670 "strip_size_kb": 64, 00:10:30.670 "state": "online", 00:10:30.670 "raid_level": "raid0", 00:10:30.670 "superblock": true, 00:10:30.670 "num_base_bdevs": 4, 00:10:30.670 "num_base_bdevs_discovered": 4, 00:10:30.670 "num_base_bdevs_operational": 4, 00:10:30.670 "base_bdevs_list": [ 00:10:30.670 { 00:10:30.670 "name": "NewBaseBdev", 00:10:30.670 "uuid": "d8b6a600-03bc-4ad8-964f-b6fa89d7e08f", 00:10:30.670 "is_configured": true, 00:10:30.670 "data_offset": 2048, 00:10:30.670 "data_size": 63488 00:10:30.670 }, 00:10:30.670 { 00:10:30.670 "name": "BaseBdev2", 00:10:30.670 "uuid": "c5280c76-35e0-4583-948c-9998d2f0f022", 00:10:30.670 "is_configured": true, 00:10:30.670 "data_offset": 2048, 00:10:30.670 "data_size": 63488 00:10:30.670 }, 00:10:30.670 { 00:10:30.670 
"name": "BaseBdev3", 00:10:30.670 "uuid": "361a9c91-2af0-4101-9dce-75017b328bb3", 00:10:30.670 "is_configured": true, 00:10:30.670 "data_offset": 2048, 00:10:30.670 "data_size": 63488 00:10:30.670 }, 00:10:30.670 { 00:10:30.670 "name": "BaseBdev4", 00:10:30.670 "uuid": "aaaedf8f-8e29-4612-8b1c-0afb4d93b875", 00:10:30.670 "is_configured": true, 00:10:30.670 "data_offset": 2048, 00:10:30.670 "data_size": 63488 00:10:30.670 } 00:10:30.670 ] 00:10:30.670 } 00:10:30.670 } 00:10:30.670 }' 00:10:30.670 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:30.670 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:30.670 BaseBdev2 00:10:30.670 BaseBdev3 00:10:30.670 BaseBdev4' 00:10:30.670 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.670 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:30.670 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.670 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:30.670 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.670 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.670 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.670 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.670 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.670 15:18:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.670 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.670 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:30.670 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.670 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.670 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.670 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.670 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.670 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.670 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.670 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:30.670 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.670 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.670 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.670 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.929 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.929 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:30.929 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.929 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:30.929 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.929 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.929 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.929 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.929 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.929 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.930 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:30.930 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.930 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.930 [2024-11-20 15:18:17.202808] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:30.930 [2024-11-20 15:18:17.202955] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:30.930 [2024-11-20 15:18:17.203052] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.930 [2024-11-20 15:18:17.203121] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:30.930 [2024-11-20 15:18:17.203132] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:30.930 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.930 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 69895 00:10:30.930 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 69895 ']' 00:10:30.930 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 69895 00:10:30.930 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:30.930 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:30.930 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69895 00:10:30.930 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:30.930 killing process with pid 69895 00:10:30.930 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:30.930 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69895' 00:10:30.930 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 69895 00:10:30.930 [2024-11-20 15:18:17.257134] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:30.930 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 69895 00:10:31.189 [2024-11-20 15:18:17.657820] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:32.567 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:32.567 00:10:32.567 real 0m11.282s 00:10:32.567 user 0m17.837s 00:10:32.567 sys 0m2.286s 00:10:32.567 15:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.567 
************************************ 00:10:32.567 END TEST raid_state_function_test_sb 00:10:32.567 ************************************ 00:10:32.567 15:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.567 15:18:18 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:32.567 15:18:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:32.567 15:18:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.567 15:18:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:32.567 ************************************ 00:10:32.567 START TEST raid_superblock_test 00:10:32.567 ************************************ 00:10:32.567 15:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:10:32.567 15:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:32.567 15:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:32.567 15:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:32.567 15:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:32.567 15:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:32.567 15:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:32.567 15:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:32.567 15:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:32.567 15:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:32.567 15:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:32.567 15:18:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:32.567 15:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:32.567 15:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:32.567 15:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:32.567 15:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:32.567 15:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:32.567 15:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70560 00:10:32.567 15:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70560 00:10:32.567 15:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:32.567 15:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70560 ']' 00:10:32.567 15:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.567 15:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:32.567 15:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.567 15:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:32.567 15:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.567 [2024-11-20 15:18:18.992379] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:10:32.567 [2024-11-20 15:18:18.992508] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70560 ] 00:10:32.839 [2024-11-20 15:18:19.175519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.839 [2024-11-20 15:18:19.300720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.096 [2024-11-20 15:18:19.518722] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.096 [2024-11-20 15:18:19.518784] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:33.665 
15:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.665 malloc1 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.665 [2024-11-20 15:18:19.923265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:33.665 [2024-11-20 15:18:19.923455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.665 [2024-11-20 15:18:19.923516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:33.665 [2024-11-20 15:18:19.923610] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.665 [2024-11-20 15:18:19.926028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.665 [2024-11-20 15:18:19.926182] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:33.665 pt1 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.665 malloc2 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.665 [2024-11-20 15:18:19.979807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:33.665 [2024-11-20 15:18:19.979871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.665 [2024-11-20 15:18:19.979904] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:33.665 [2024-11-20 15:18:19.979916] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.665 [2024-11-20 15:18:19.982550] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.665 [2024-11-20 15:18:19.982592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:33.665 
pt2 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.665 15:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.665 malloc3 00:10:33.665 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.665 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:33.665 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.665 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.665 [2024-11-20 15:18:20.049258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:33.666 [2024-11-20 15:18:20.049419] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.666 [2024-11-20 15:18:20.049479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:33.666 [2024-11-20 15:18:20.049605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.666 [2024-11-20 15:18:20.051990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.666 [2024-11-20 15:18:20.052124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:33.666 pt3 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.666 malloc4 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.666 [2024-11-20 15:18:20.111978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:33.666 [2024-11-20 15:18:20.112164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.666 [2024-11-20 15:18:20.112196] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:33.666 [2024-11-20 15:18:20.112207] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.666 [2024-11-20 15:18:20.114541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.666 [2024-11-20 15:18:20.114580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:33.666 pt4 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.666 [2024-11-20 15:18:20.124005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:33.666 [2024-11-20 
15:18:20.126052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:33.666 [2024-11-20 15:18:20.126142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:33.666 [2024-11-20 15:18:20.126186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:33.666 [2024-11-20 15:18:20.126358] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:33.666 [2024-11-20 15:18:20.126371] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:33.666 [2024-11-20 15:18:20.126647] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:33.666 [2024-11-20 15:18:20.126864] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:33.666 [2024-11-20 15:18:20.126880] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:33.666 [2024-11-20 15:18:20.127044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.666 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.926 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.926 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.926 "name": "raid_bdev1", 00:10:33.926 "uuid": "40e773d8-34bd-4fc4-9c43-1acbcc21f9d3", 00:10:33.926 "strip_size_kb": 64, 00:10:33.926 "state": "online", 00:10:33.926 "raid_level": "raid0", 00:10:33.926 "superblock": true, 00:10:33.926 "num_base_bdevs": 4, 00:10:33.926 "num_base_bdevs_discovered": 4, 00:10:33.926 "num_base_bdevs_operational": 4, 00:10:33.926 "base_bdevs_list": [ 00:10:33.926 { 00:10:33.926 "name": "pt1", 00:10:33.926 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:33.926 "is_configured": true, 00:10:33.926 "data_offset": 2048, 00:10:33.926 "data_size": 63488 00:10:33.926 }, 00:10:33.926 { 00:10:33.926 "name": "pt2", 00:10:33.926 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:33.926 "is_configured": true, 00:10:33.926 "data_offset": 2048, 00:10:33.926 "data_size": 63488 00:10:33.926 }, 00:10:33.926 { 00:10:33.926 "name": "pt3", 00:10:33.926 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:33.926 "is_configured": true, 00:10:33.926 "data_offset": 2048, 00:10:33.926 
"data_size": 63488 00:10:33.926 }, 00:10:33.926 { 00:10:33.926 "name": "pt4", 00:10:33.926 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:33.926 "is_configured": true, 00:10:33.926 "data_offset": 2048, 00:10:33.926 "data_size": 63488 00:10:33.926 } 00:10:33.926 ] 00:10:33.926 }' 00:10:33.926 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.926 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.186 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:34.186 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:34.186 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:34.186 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:34.186 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:34.186 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:34.186 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:34.186 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.186 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:34.186 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.186 [2024-11-20 15:18:20.531697] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:34.186 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.186 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:34.186 "name": "raid_bdev1", 00:10:34.186 "aliases": [ 00:10:34.186 "40e773d8-34bd-4fc4-9c43-1acbcc21f9d3" 
00:10:34.186 ], 00:10:34.186 "product_name": "Raid Volume", 00:10:34.186 "block_size": 512, 00:10:34.186 "num_blocks": 253952, 00:10:34.186 "uuid": "40e773d8-34bd-4fc4-9c43-1acbcc21f9d3", 00:10:34.186 "assigned_rate_limits": { 00:10:34.186 "rw_ios_per_sec": 0, 00:10:34.186 "rw_mbytes_per_sec": 0, 00:10:34.186 "r_mbytes_per_sec": 0, 00:10:34.186 "w_mbytes_per_sec": 0 00:10:34.186 }, 00:10:34.186 "claimed": false, 00:10:34.186 "zoned": false, 00:10:34.186 "supported_io_types": { 00:10:34.186 "read": true, 00:10:34.186 "write": true, 00:10:34.186 "unmap": true, 00:10:34.186 "flush": true, 00:10:34.186 "reset": true, 00:10:34.186 "nvme_admin": false, 00:10:34.186 "nvme_io": false, 00:10:34.186 "nvme_io_md": false, 00:10:34.186 "write_zeroes": true, 00:10:34.186 "zcopy": false, 00:10:34.186 "get_zone_info": false, 00:10:34.186 "zone_management": false, 00:10:34.186 "zone_append": false, 00:10:34.186 "compare": false, 00:10:34.186 "compare_and_write": false, 00:10:34.186 "abort": false, 00:10:34.186 "seek_hole": false, 00:10:34.186 "seek_data": false, 00:10:34.186 "copy": false, 00:10:34.186 "nvme_iov_md": false 00:10:34.186 }, 00:10:34.186 "memory_domains": [ 00:10:34.186 { 00:10:34.186 "dma_device_id": "system", 00:10:34.186 "dma_device_type": 1 00:10:34.186 }, 00:10:34.186 { 00:10:34.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.186 "dma_device_type": 2 00:10:34.186 }, 00:10:34.186 { 00:10:34.186 "dma_device_id": "system", 00:10:34.186 "dma_device_type": 1 00:10:34.186 }, 00:10:34.186 { 00:10:34.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.186 "dma_device_type": 2 00:10:34.186 }, 00:10:34.186 { 00:10:34.186 "dma_device_id": "system", 00:10:34.186 "dma_device_type": 1 00:10:34.186 }, 00:10:34.186 { 00:10:34.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.186 "dma_device_type": 2 00:10:34.186 }, 00:10:34.186 { 00:10:34.186 "dma_device_id": "system", 00:10:34.186 "dma_device_type": 1 00:10:34.186 }, 00:10:34.186 { 00:10:34.186 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:34.186 "dma_device_type": 2 00:10:34.186 } 00:10:34.186 ], 00:10:34.186 "driver_specific": { 00:10:34.186 "raid": { 00:10:34.186 "uuid": "40e773d8-34bd-4fc4-9c43-1acbcc21f9d3", 00:10:34.186 "strip_size_kb": 64, 00:10:34.186 "state": "online", 00:10:34.186 "raid_level": "raid0", 00:10:34.186 "superblock": true, 00:10:34.186 "num_base_bdevs": 4, 00:10:34.186 "num_base_bdevs_discovered": 4, 00:10:34.186 "num_base_bdevs_operational": 4, 00:10:34.186 "base_bdevs_list": [ 00:10:34.186 { 00:10:34.186 "name": "pt1", 00:10:34.186 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:34.186 "is_configured": true, 00:10:34.186 "data_offset": 2048, 00:10:34.186 "data_size": 63488 00:10:34.186 }, 00:10:34.186 { 00:10:34.186 "name": "pt2", 00:10:34.186 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:34.186 "is_configured": true, 00:10:34.186 "data_offset": 2048, 00:10:34.186 "data_size": 63488 00:10:34.186 }, 00:10:34.186 { 00:10:34.186 "name": "pt3", 00:10:34.186 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:34.186 "is_configured": true, 00:10:34.186 "data_offset": 2048, 00:10:34.186 "data_size": 63488 00:10:34.186 }, 00:10:34.187 { 00:10:34.187 "name": "pt4", 00:10:34.187 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:34.187 "is_configured": true, 00:10:34.187 "data_offset": 2048, 00:10:34.187 "data_size": 63488 00:10:34.187 } 00:10:34.187 ] 00:10:34.187 } 00:10:34.187 } 00:10:34.187 }' 00:10:34.187 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:34.187 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:34.187 pt2 00:10:34.187 pt3 00:10:34.187 pt4' 00:10:34.187 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.187 15:18:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:34.187 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.187 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:34.187 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.187 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.187 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.445 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.445 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.445 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.445 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.445 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:34.445 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.445 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.445 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.445 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.445 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.445 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.445 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.445 15:18:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:34.445 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.445 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.445 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.446 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.446 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.446 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.446 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.446 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:34.446 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.446 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.446 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.446 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.446 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.446 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.446 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:34.446 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:34.446 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:34.446 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.446 [2024-11-20 15:18:20.843215] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:34.446 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.446 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=40e773d8-34bd-4fc4-9c43-1acbcc21f9d3 00:10:34.446 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 40e773d8-34bd-4fc4-9c43-1acbcc21f9d3 ']' 00:10:34.446 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:34.446 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.446 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.446 [2024-11-20 15:18:20.886929] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:34.446 [2024-11-20 15:18:20.886959] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:34.446 [2024-11-20 15:18:20.887041] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:34.446 [2024-11-20 15:18:20.887112] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:34.446 [2024-11-20 15:18:20.887130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:34.446 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.446 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.446 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.446 15:18:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:34.446 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:34.446 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.705 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:34.705 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:34.705 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:34.705 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:34.705 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.705 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.705 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.705 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:34.705 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:34.705 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.705 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.705 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.705 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:34.705 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:34.705 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.705 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.705 15:18:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.705 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:34.705 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:34.705 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.705 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.705 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.705 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:34.705 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.705 15:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:34.705 15:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.705 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.705 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:34.705 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:34.705 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:34.705 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:34.705 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:34.706 15:18:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.706 [2024-11-20 15:18:21.046887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:34.706 [2024-11-20 15:18:21.048996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:34.706 [2024-11-20 15:18:21.049049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:34.706 [2024-11-20 15:18:21.049084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:34.706 [2024-11-20 15:18:21.049135] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:34.706 [2024-11-20 15:18:21.049188] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:34.706 [2024-11-20 15:18:21.049211] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:34.706 [2024-11-20 15:18:21.049232] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:34.706 [2024-11-20 15:18:21.049249] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:34.706 [2024-11-20 15:18:21.049264] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:10:34.706 request: 00:10:34.706 { 00:10:34.706 "name": "raid_bdev1", 00:10:34.706 "raid_level": "raid0", 00:10:34.706 "base_bdevs": [ 00:10:34.706 "malloc1", 00:10:34.706 "malloc2", 00:10:34.706 "malloc3", 00:10:34.706 "malloc4" 00:10:34.706 ], 00:10:34.706 "strip_size_kb": 64, 00:10:34.706 "superblock": false, 00:10:34.706 "method": "bdev_raid_create", 00:10:34.706 "req_id": 1 00:10:34.706 } 00:10:34.706 Got JSON-RPC error response 00:10:34.706 response: 00:10:34.706 { 00:10:34.706 "code": -17, 00:10:34.706 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:34.706 } 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.706 [2024-11-20 15:18:21.106782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:34.706 [2024-11-20 15:18:21.106968] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.706 [2024-11-20 15:18:21.106998] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:34.706 [2024-11-20 15:18:21.107012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.706 [2024-11-20 15:18:21.109418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.706 [2024-11-20 15:18:21.109460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:34.706 [2024-11-20 15:18:21.109538] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:34.706 [2024-11-20 15:18:21.109594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:34.706 pt1 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.706 "name": "raid_bdev1", 00:10:34.706 "uuid": "40e773d8-34bd-4fc4-9c43-1acbcc21f9d3", 00:10:34.706 "strip_size_kb": 64, 00:10:34.706 "state": "configuring", 00:10:34.706 "raid_level": "raid0", 00:10:34.706 "superblock": true, 00:10:34.706 "num_base_bdevs": 4, 00:10:34.706 "num_base_bdevs_discovered": 1, 00:10:34.706 "num_base_bdevs_operational": 4, 00:10:34.706 "base_bdevs_list": [ 00:10:34.706 { 00:10:34.706 "name": "pt1", 00:10:34.706 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:34.706 "is_configured": true, 00:10:34.706 "data_offset": 2048, 00:10:34.706 "data_size": 63488 00:10:34.706 }, 00:10:34.706 { 00:10:34.706 "name": null, 00:10:34.706 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:34.706 "is_configured": false, 00:10:34.706 "data_offset": 2048, 00:10:34.706 "data_size": 63488 00:10:34.706 }, 00:10:34.706 { 00:10:34.706 "name": null, 00:10:34.706 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:34.706 "is_configured": false, 00:10:34.706 "data_offset": 2048, 00:10:34.706 "data_size": 63488 00:10:34.706 }, 00:10:34.706 { 00:10:34.706 "name": null, 00:10:34.706 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:34.706 "is_configured": false, 00:10:34.706 "data_offset": 2048, 00:10:34.706 "data_size": 63488 00:10:34.706 } 00:10:34.706 ] 00:10:34.706 }' 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.706 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.275 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:35.275 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:35.275 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.275 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.275 [2024-11-20 15:18:21.506454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:35.275 [2024-11-20 15:18:21.506525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.275 [2024-11-20 15:18:21.506546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:35.275 [2024-11-20 15:18:21.506561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.275 [2024-11-20 15:18:21.507051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.275 [2024-11-20 15:18:21.507081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:35.275 [2024-11-20 15:18:21.507165] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:35.275 [2024-11-20 15:18:21.507192] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:35.275 pt2 00:10:35.275 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.275 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:35.275 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.275 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.275 [2024-11-20 15:18:21.514438] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:35.275 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.275 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:35.275 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:35.275 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.275 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.275 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.275 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.275 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.275 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.275 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.275 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.275 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.275 15:18:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.275 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.275 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.275 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.275 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.275 "name": "raid_bdev1", 00:10:35.275 "uuid": "40e773d8-34bd-4fc4-9c43-1acbcc21f9d3", 00:10:35.275 "strip_size_kb": 64, 00:10:35.275 "state": "configuring", 00:10:35.275 "raid_level": "raid0", 00:10:35.275 "superblock": true, 00:10:35.275 "num_base_bdevs": 4, 00:10:35.275 "num_base_bdevs_discovered": 1, 00:10:35.275 "num_base_bdevs_operational": 4, 00:10:35.275 "base_bdevs_list": [ 00:10:35.275 { 00:10:35.275 "name": "pt1", 00:10:35.275 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:35.275 "is_configured": true, 00:10:35.275 "data_offset": 2048, 00:10:35.275 "data_size": 63488 00:10:35.275 }, 00:10:35.275 { 00:10:35.275 "name": null, 00:10:35.275 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:35.275 "is_configured": false, 00:10:35.275 "data_offset": 0, 00:10:35.275 "data_size": 63488 00:10:35.275 }, 00:10:35.275 { 00:10:35.275 "name": null, 00:10:35.275 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:35.275 "is_configured": false, 00:10:35.275 "data_offset": 2048, 00:10:35.275 "data_size": 63488 00:10:35.275 }, 00:10:35.275 { 00:10:35.275 "name": null, 00:10:35.275 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:35.275 "is_configured": false, 00:10:35.275 "data_offset": 2048, 00:10:35.275 "data_size": 63488 00:10:35.275 } 00:10:35.275 ] 00:10:35.275 }' 00:10:35.275 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.275 15:18:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:35.535 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:35.535 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:35.535 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:35.535 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.535 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.535 [2024-11-20 15:18:21.929850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:35.535 [2024-11-20 15:18:21.929914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.535 [2024-11-20 15:18:21.929937] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:35.535 [2024-11-20 15:18:21.929949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.535 [2024-11-20 15:18:21.930394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.535 [2024-11-20 15:18:21.930412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:35.535 [2024-11-20 15:18:21.930493] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:35.535 [2024-11-20 15:18:21.930516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:35.535 pt2 00:10:35.535 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.535 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:35.535 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:35.535 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:35.535 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.535 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.535 [2024-11-20 15:18:21.941826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:35.535 [2024-11-20 15:18:21.941876] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.535 [2024-11-20 15:18:21.941897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:35.535 [2024-11-20 15:18:21.941908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.535 [2024-11-20 15:18:21.942313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.535 [2024-11-20 15:18:21.942331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:35.535 [2024-11-20 15:18:21.942401] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:35.535 [2024-11-20 15:18:21.942427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:35.535 pt3 00:10:35.535 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.535 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:35.535 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:35.535 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:35.535 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.535 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.535 [2024-11-20 15:18:21.953763] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:35.535 [2024-11-20 15:18:21.953807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.535 [2024-11-20 15:18:21.953827] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:35.535 [2024-11-20 15:18:21.953837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.535 [2024-11-20 15:18:21.954221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.535 [2024-11-20 15:18:21.954246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:35.535 [2024-11-20 15:18:21.954309] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:35.535 [2024-11-20 15:18:21.954332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:35.535 [2024-11-20 15:18:21.954468] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:35.535 [2024-11-20 15:18:21.954477] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:35.535 [2024-11-20 15:18:21.954724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:35.535 [2024-11-20 15:18:21.954872] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:35.535 [2024-11-20 15:18:21.954886] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:35.535 [2024-11-20 15:18:21.955016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.535 pt4 00:10:35.536 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.536 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:35.536 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:35.536 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:35.536 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:35.536 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:35.536 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.536 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.536 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.536 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.536 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.536 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.536 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.536 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.536 15:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.536 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.536 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.536 15:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.536 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.536 "name": "raid_bdev1", 00:10:35.536 "uuid": "40e773d8-34bd-4fc4-9c43-1acbcc21f9d3", 00:10:35.536 "strip_size_kb": 64, 00:10:35.536 "state": "online", 00:10:35.536 "raid_level": "raid0", 00:10:35.536 
"superblock": true, 00:10:35.536 "num_base_bdevs": 4, 00:10:35.536 "num_base_bdevs_discovered": 4, 00:10:35.536 "num_base_bdevs_operational": 4, 00:10:35.536 "base_bdevs_list": [ 00:10:35.536 { 00:10:35.536 "name": "pt1", 00:10:35.536 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:35.536 "is_configured": true, 00:10:35.536 "data_offset": 2048, 00:10:35.536 "data_size": 63488 00:10:35.536 }, 00:10:35.536 { 00:10:35.536 "name": "pt2", 00:10:35.536 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:35.536 "is_configured": true, 00:10:35.536 "data_offset": 2048, 00:10:35.536 "data_size": 63488 00:10:35.536 }, 00:10:35.536 { 00:10:35.536 "name": "pt3", 00:10:35.536 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:35.536 "is_configured": true, 00:10:35.536 "data_offset": 2048, 00:10:35.536 "data_size": 63488 00:10:35.536 }, 00:10:35.536 { 00:10:35.536 "name": "pt4", 00:10:35.536 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:35.536 "is_configured": true, 00:10:35.536 "data_offset": 2048, 00:10:35.536 "data_size": 63488 00:10:35.536 } 00:10:35.536 ] 00:10:35.536 }' 00:10:35.536 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.536 15:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.114 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:36.114 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:36.114 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:36.114 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:36.114 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:36.114 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:36.114 15:18:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:36.114 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:36.114 15:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.114 15:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.114 [2024-11-20 15:18:22.401967] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:36.114 15:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.114 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:36.114 "name": "raid_bdev1", 00:10:36.114 "aliases": [ 00:10:36.114 "40e773d8-34bd-4fc4-9c43-1acbcc21f9d3" 00:10:36.114 ], 00:10:36.114 "product_name": "Raid Volume", 00:10:36.114 "block_size": 512, 00:10:36.114 "num_blocks": 253952, 00:10:36.114 "uuid": "40e773d8-34bd-4fc4-9c43-1acbcc21f9d3", 00:10:36.114 "assigned_rate_limits": { 00:10:36.114 "rw_ios_per_sec": 0, 00:10:36.114 "rw_mbytes_per_sec": 0, 00:10:36.114 "r_mbytes_per_sec": 0, 00:10:36.114 "w_mbytes_per_sec": 0 00:10:36.114 }, 00:10:36.114 "claimed": false, 00:10:36.114 "zoned": false, 00:10:36.114 "supported_io_types": { 00:10:36.114 "read": true, 00:10:36.114 "write": true, 00:10:36.114 "unmap": true, 00:10:36.114 "flush": true, 00:10:36.114 "reset": true, 00:10:36.114 "nvme_admin": false, 00:10:36.114 "nvme_io": false, 00:10:36.114 "nvme_io_md": false, 00:10:36.114 "write_zeroes": true, 00:10:36.114 "zcopy": false, 00:10:36.114 "get_zone_info": false, 00:10:36.114 "zone_management": false, 00:10:36.114 "zone_append": false, 00:10:36.114 "compare": false, 00:10:36.114 "compare_and_write": false, 00:10:36.114 "abort": false, 00:10:36.114 "seek_hole": false, 00:10:36.114 "seek_data": false, 00:10:36.114 "copy": false, 00:10:36.114 "nvme_iov_md": false 00:10:36.114 }, 00:10:36.114 
"memory_domains": [ 00:10:36.114 { 00:10:36.114 "dma_device_id": "system", 00:10:36.114 "dma_device_type": 1 00:10:36.114 }, 00:10:36.114 { 00:10:36.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.114 "dma_device_type": 2 00:10:36.114 }, 00:10:36.114 { 00:10:36.114 "dma_device_id": "system", 00:10:36.114 "dma_device_type": 1 00:10:36.114 }, 00:10:36.114 { 00:10:36.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.114 "dma_device_type": 2 00:10:36.114 }, 00:10:36.114 { 00:10:36.114 "dma_device_id": "system", 00:10:36.114 "dma_device_type": 1 00:10:36.114 }, 00:10:36.114 { 00:10:36.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.114 "dma_device_type": 2 00:10:36.114 }, 00:10:36.114 { 00:10:36.114 "dma_device_id": "system", 00:10:36.114 "dma_device_type": 1 00:10:36.114 }, 00:10:36.114 { 00:10:36.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.114 "dma_device_type": 2 00:10:36.114 } 00:10:36.114 ], 00:10:36.114 "driver_specific": { 00:10:36.114 "raid": { 00:10:36.114 "uuid": "40e773d8-34bd-4fc4-9c43-1acbcc21f9d3", 00:10:36.114 "strip_size_kb": 64, 00:10:36.114 "state": "online", 00:10:36.114 "raid_level": "raid0", 00:10:36.114 "superblock": true, 00:10:36.114 "num_base_bdevs": 4, 00:10:36.115 "num_base_bdevs_discovered": 4, 00:10:36.115 "num_base_bdevs_operational": 4, 00:10:36.115 "base_bdevs_list": [ 00:10:36.115 { 00:10:36.115 "name": "pt1", 00:10:36.115 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:36.115 "is_configured": true, 00:10:36.115 "data_offset": 2048, 00:10:36.115 "data_size": 63488 00:10:36.115 }, 00:10:36.115 { 00:10:36.115 "name": "pt2", 00:10:36.115 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:36.115 "is_configured": true, 00:10:36.115 "data_offset": 2048, 00:10:36.115 "data_size": 63488 00:10:36.115 }, 00:10:36.115 { 00:10:36.115 "name": "pt3", 00:10:36.115 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:36.115 "is_configured": true, 00:10:36.115 "data_offset": 2048, 00:10:36.115 "data_size": 63488 
00:10:36.115 }, 00:10:36.115 { 00:10:36.115 "name": "pt4", 00:10:36.115 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:36.115 "is_configured": true, 00:10:36.115 "data_offset": 2048, 00:10:36.115 "data_size": 63488 00:10:36.115 } 00:10:36.115 ] 00:10:36.115 } 00:10:36.115 } 00:10:36.115 }' 00:10:36.115 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:36.115 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:36.115 pt2 00:10:36.115 pt3 00:10:36.115 pt4' 00:10:36.115 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.115 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:36.115 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.115 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:36.115 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.115 15:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.115 15:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.115 15:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.115 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.115 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.115 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.115 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:36.115 15:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.115 15:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.115 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.373 15:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.373 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.373 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.373 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.373 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.374 [2024-11-20 15:18:22.709474] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 40e773d8-34bd-4fc4-9c43-1acbcc21f9d3 '!=' 40e773d8-34bd-4fc4-9c43-1acbcc21f9d3 ']' 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70560 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70560 ']' 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70560 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70560 00:10:36.374 killing process with pid 70560 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70560' 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70560 00:10:36.374 [2024-11-20 15:18:22.796324] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:36.374 [2024-11-20 15:18:22.796409] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:36.374 15:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70560 00:10:36.374 [2024-11-20 15:18:22.796483] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:36.374 [2024-11-20 15:18:22.796495] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:36.941 [2024-11-20 15:18:23.197457] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:37.876 ************************************ 00:10:37.876 END TEST raid_superblock_test 00:10:37.876 ************************************ 00:10:37.876 15:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:37.876 00:10:37.876 real 0m5.452s 00:10:37.876 user 0m7.716s 00:10:37.876 sys 0m1.098s 00:10:37.876 15:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.876 15:18:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.135 15:18:24 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:38.135 15:18:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:38.135 15:18:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.135 15:18:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:38.135 ************************************ 00:10:38.135 START TEST raid_read_error_test 00:10:38.135 ************************************ 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Kjy7JzwOq6 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70819 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70819 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 70819 ']' 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.135 15:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.135 [2024-11-20 15:18:24.513905] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:10:38.136 [2024-11-20 15:18:24.514074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70819 ] 00:10:38.395 [2024-11-20 15:18:24.686519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.395 [2024-11-20 15:18:24.807771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.654 [2024-11-20 15:18:25.021464] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.654 [2024-11-20 15:18:25.021516] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.913 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:38.913 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:38.913 15:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:38.913 15:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:38.913 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.913 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.913 BaseBdev1_malloc 00:10:38.913 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.913 15:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:38.913 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.913 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.913 true 00:10:38.913 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:38.913 15:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:38.913 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.913 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.913 [2024-11-20 15:18:25.389474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:38.913 [2024-11-20 15:18:25.389539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.913 [2024-11-20 15:18:25.389565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:38.913 [2024-11-20 15:18:25.389580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.913 [2024-11-20 15:18:25.392007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.913 [2024-11-20 15:18:25.392055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:38.913 BaseBdev1 00:10:38.913 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.172 BaseBdev2_malloc 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.172 true 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.172 [2024-11-20 15:18:25.443153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:39.172 [2024-11-20 15:18:25.443220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.172 [2024-11-20 15:18:25.443240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:39.172 [2024-11-20 15:18:25.443256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.172 [2024-11-20 15:18:25.445641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.172 [2024-11-20 15:18:25.445696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:39.172 BaseBdev2 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.172 BaseBdev3_malloc 00:10:39.172 15:18:25 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.172 true 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.172 [2024-11-20 15:18:25.515397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:39.172 [2024-11-20 15:18:25.515461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.172 [2024-11-20 15:18:25.515484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:39.172 [2024-11-20 15:18:25.515499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.172 [2024-11-20 15:18:25.517975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.172 [2024-11-20 15:18:25.518018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:39.172 BaseBdev3 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.172 BaseBdev4_malloc 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.172 true 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.172 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.172 [2024-11-20 15:18:25.572184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:39.172 [2024-11-20 15:18:25.572246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.173 [2024-11-20 15:18:25.572269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:39.173 [2024-11-20 15:18:25.572284] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.173 [2024-11-20 15:18:25.574875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.173 [2024-11-20 15:18:25.574919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:39.173 BaseBdev4 00:10:39.173 15:18:25 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.173 15:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:39.173 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.173 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.173 [2024-11-20 15:18:25.580247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:39.173 [2024-11-20 15:18:25.582395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:39.173 [2024-11-20 15:18:25.582477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:39.173 [2024-11-20 15:18:25.582545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:39.173 [2024-11-20 15:18:25.582810] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:39.173 [2024-11-20 15:18:25.582834] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:39.173 [2024-11-20 15:18:25.583133] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:39.173 [2024-11-20 15:18:25.583317] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:39.173 [2024-11-20 15:18:25.583335] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:39.173 [2024-11-20 15:18:25.583531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.173 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.173 15:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:39.173 15:18:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:39.173 15:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.173 15:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.173 15:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.173 15:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.173 15:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.173 15:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.173 15:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.173 15:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.173 15:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.173 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.173 15:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.173 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.173 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.173 15:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.173 "name": "raid_bdev1", 00:10:39.173 "uuid": "21731871-9da3-4ac2-a898-0da221584c33", 00:10:39.173 "strip_size_kb": 64, 00:10:39.173 "state": "online", 00:10:39.173 "raid_level": "raid0", 00:10:39.173 "superblock": true, 00:10:39.173 "num_base_bdevs": 4, 00:10:39.173 "num_base_bdevs_discovered": 4, 00:10:39.173 "num_base_bdevs_operational": 4, 00:10:39.173 "base_bdevs_list": [ 00:10:39.173 
{ 00:10:39.173 "name": "BaseBdev1", 00:10:39.173 "uuid": "bb96b756-a514-5484-aac0-7e88e0785fe2", 00:10:39.173 "is_configured": true, 00:10:39.173 "data_offset": 2048, 00:10:39.173 "data_size": 63488 00:10:39.173 }, 00:10:39.173 { 00:10:39.173 "name": "BaseBdev2", 00:10:39.173 "uuid": "ceeb161c-9542-58c5-a175-588d7b75ecce", 00:10:39.173 "is_configured": true, 00:10:39.173 "data_offset": 2048, 00:10:39.173 "data_size": 63488 00:10:39.173 }, 00:10:39.173 { 00:10:39.173 "name": "BaseBdev3", 00:10:39.173 "uuid": "60643669-3bfa-54e4-bf1f-4a81a1008f3b", 00:10:39.173 "is_configured": true, 00:10:39.173 "data_offset": 2048, 00:10:39.173 "data_size": 63488 00:10:39.173 }, 00:10:39.173 { 00:10:39.173 "name": "BaseBdev4", 00:10:39.173 "uuid": "3259b14d-e5bd-5528-b341-1c7f5da97682", 00:10:39.173 "is_configured": true, 00:10:39.173 "data_offset": 2048, 00:10:39.173 "data_size": 63488 00:10:39.173 } 00:10:39.173 ] 00:10:39.173 }' 00:10:39.173 15:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.173 15:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.741 15:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:39.741 15:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:39.741 [2024-11-20 15:18:26.084899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:40.677 15:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:40.677 15:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.677 15:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.677 15:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.677 15:18:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:40.677 15:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:40.677 15:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:40.677 15:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:40.677 15:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:40.677 15:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.677 15:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.677 15:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.677 15:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.677 15:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.677 15:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.677 15:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.677 15:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.677 15:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.677 15:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.677 15:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.677 15:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.677 15:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.677 15:18:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.677 "name": "raid_bdev1", 00:10:40.677 "uuid": "21731871-9da3-4ac2-a898-0da221584c33", 00:10:40.677 "strip_size_kb": 64, 00:10:40.677 "state": "online", 00:10:40.677 "raid_level": "raid0", 00:10:40.677 "superblock": true, 00:10:40.677 "num_base_bdevs": 4, 00:10:40.677 "num_base_bdevs_discovered": 4, 00:10:40.677 "num_base_bdevs_operational": 4, 00:10:40.677 "base_bdevs_list": [ 00:10:40.677 { 00:10:40.677 "name": "BaseBdev1", 00:10:40.677 "uuid": "bb96b756-a514-5484-aac0-7e88e0785fe2", 00:10:40.677 "is_configured": true, 00:10:40.677 "data_offset": 2048, 00:10:40.677 "data_size": 63488 00:10:40.677 }, 00:10:40.677 { 00:10:40.677 "name": "BaseBdev2", 00:10:40.677 "uuid": "ceeb161c-9542-58c5-a175-588d7b75ecce", 00:10:40.677 "is_configured": true, 00:10:40.677 "data_offset": 2048, 00:10:40.677 "data_size": 63488 00:10:40.677 }, 00:10:40.677 { 00:10:40.677 "name": "BaseBdev3", 00:10:40.677 "uuid": "60643669-3bfa-54e4-bf1f-4a81a1008f3b", 00:10:40.677 "is_configured": true, 00:10:40.677 "data_offset": 2048, 00:10:40.677 "data_size": 63488 00:10:40.677 }, 00:10:40.677 { 00:10:40.677 "name": "BaseBdev4", 00:10:40.677 "uuid": "3259b14d-e5bd-5528-b341-1c7f5da97682", 00:10:40.677 "is_configured": true, 00:10:40.677 "data_offset": 2048, 00:10:40.677 "data_size": 63488 00:10:40.677 } 00:10:40.677 ] 00:10:40.677 }' 00:10:40.677 15:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.677 15:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.245 15:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:41.245 15:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.245 15:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.245 [2024-11-20 15:18:27.433204] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:41.245 [2024-11-20 15:18:27.433246] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:41.245 [2024-11-20 15:18:27.435871] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:41.245 [2024-11-20 15:18:27.435938] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:41.245 [2024-11-20 15:18:27.435985] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:41.245 [2024-11-20 15:18:27.435999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:41.245 15:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.245 { 00:10:41.245 "results": [ 00:10:41.245 { 00:10:41.245 "job": "raid_bdev1", 00:10:41.245 "core_mask": "0x1", 00:10:41.245 "workload": "randrw", 00:10:41.245 "percentage": 50, 00:10:41.245 "status": "finished", 00:10:41.245 "queue_depth": 1, 00:10:41.245 "io_size": 131072, 00:10:41.245 "runtime": 1.348354, 00:10:41.245 "iops": 15738.448508329415, 00:10:41.245 "mibps": 1967.3060635411769, 00:10:41.245 "io_failed": 1, 00:10:41.245 "io_timeout": 0, 00:10:41.245 "avg_latency_us": 87.92929062399821, 00:10:41.245 "min_latency_us": 27.142168674698794, 00:10:41.245 "max_latency_us": 1414.6827309236949 00:10:41.245 } 00:10:41.245 ], 00:10:41.245 "core_count": 1 00:10:41.245 } 00:10:41.245 15:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70819 00:10:41.245 15:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 70819 ']' 00:10:41.245 15:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 70819 00:10:41.245 15:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:41.245 15:18:27 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.245 15:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70819 00:10:41.245 15:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.245 15:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.245 killing process with pid 70819 00:10:41.245 15:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70819' 00:10:41.245 15:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 70819 00:10:41.245 15:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 70819 00:10:41.245 [2024-11-20 15:18:27.474784] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:41.505 [2024-11-20 15:18:27.804165] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:42.885 15:18:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Kjy7JzwOq6 00:10:42.885 15:18:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:42.885 15:18:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:42.885 15:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:42.885 15:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:42.885 15:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:42.885 15:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:42.885 15:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:42.885 00:10:42.885 real 0m4.617s 00:10:42.885 user 0m5.391s 00:10:42.885 sys 0m0.596s 00:10:42.885 15:18:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:42.885 ************************************ 00:10:42.885 END TEST raid_read_error_test 00:10:42.885 15:18:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.885 ************************************ 00:10:42.885 15:18:29 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:42.885 15:18:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:42.885 15:18:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.885 15:18:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:42.885 ************************************ 00:10:42.885 START TEST raid_write_error_test 00:10:42.885 ************************************ 00:10:42.885 15:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:10:42.885 15:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:42.885 15:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:42.885 15:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:42.885 15:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:42.885 15:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:42.885 15:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:42.885 15:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:42.885 15:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:42.885 15:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:42.885 15:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:42.885 15:18:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:42.886 15:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:42.886 15:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:42.886 15:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:42.886 15:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:42.886 15:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:42.886 15:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:42.886 15:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:42.886 15:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:42.886 15:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:42.886 15:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:42.886 15:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:42.886 15:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:42.886 15:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:42.886 15:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:42.886 15:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:42.886 15:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:42.886 15:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:42.886 15:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kwp6HjHD1u 00:10:42.886 15:18:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70969 00:10:42.886 15:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70969 00:10:42.886 15:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:42.886 15:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 70969 ']' 00:10:42.886 15:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.886 15:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.886 15:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.886 15:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.886 15:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.886 [2024-11-20 15:18:29.203032] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:10:42.886 [2024-11-20 15:18:29.203177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70969 ] 00:10:43.144 [2024-11-20 15:18:29.383791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.144 [2024-11-20 15:18:29.504758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.403 [2024-11-20 15:18:29.715564] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.403 [2024-11-20 15:18:29.715638] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.662 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.662 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:43.662 15:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:43.662 15:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:43.662 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.662 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.662 BaseBdev1_malloc 00:10:43.662 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.662 15:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:43.662 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.662 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.662 true 00:10:43.662 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:43.662 15:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:43.662 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.662 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.662 [2024-11-20 15:18:30.075110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:43.662 [2024-11-20 15:18:30.075173] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.662 [2024-11-20 15:18:30.075197] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:43.662 [2024-11-20 15:18:30.075213] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.662 [2024-11-20 15:18:30.077591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.662 [2024-11-20 15:18:30.077638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:43.662 BaseBdev1 00:10:43.662 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.662 15:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:43.662 15:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:43.662 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.662 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.662 BaseBdev2_malloc 00:10:43.662 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.662 15:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:43.662 15:18:30 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.662 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.662 true 00:10:43.662 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.662 15:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:43.662 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.662 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.662 [2024-11-20 15:18:30.132255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:43.662 [2024-11-20 15:18:30.132320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.662 [2024-11-20 15:18:30.132340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:43.662 [2024-11-20 15:18:30.132354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.662 [2024-11-20 15:18:30.134831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.662 [2024-11-20 15:18:30.134874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:43.662 BaseBdev2 00:10:43.662 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.662 15:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:43.662 15:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:43.662 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.662 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:43.923 BaseBdev3_malloc 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.923 true 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.923 [2024-11-20 15:18:30.198851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:43.923 [2024-11-20 15:18:30.198915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.923 [2024-11-20 15:18:30.198953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:43.923 [2024-11-20 15:18:30.198969] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.923 [2024-11-20 15:18:30.201545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.923 [2024-11-20 15:18:30.201592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:43.923 BaseBdev3 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.923 BaseBdev4_malloc 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.923 true 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.923 [2024-11-20 15:18:30.256875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:43.923 [2024-11-20 15:18:30.256938] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.923 [2024-11-20 15:18:30.256960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:43.923 [2024-11-20 15:18:30.256974] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.923 [2024-11-20 15:18:30.259365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.923 [2024-11-20 15:18:30.259418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:43.923 BaseBdev4 
00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.923 [2024-11-20 15:18:30.264938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:43.923 [2024-11-20 15:18:30.267038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:43.923 [2024-11-20 15:18:30.267120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:43.923 [2024-11-20 15:18:30.267185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:43.923 [2024-11-20 15:18:30.267402] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:43.923 [2024-11-20 15:18:30.267421] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:43.923 [2024-11-20 15:18:30.267718] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:43.923 [2024-11-20 15:18:30.267892] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:43.923 [2024-11-20 15:18:30.267905] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:43.923 [2024-11-20 15:18:30.268073] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.923 15:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.923 "name": "raid_bdev1", 00:10:43.923 "uuid": "7e048286-4333-4cd5-b3b2-ba45b15a6702", 00:10:43.923 "strip_size_kb": 64, 00:10:43.923 "state": "online", 00:10:43.923 "raid_level": "raid0", 00:10:43.923 "superblock": true, 00:10:43.923 "num_base_bdevs": 4, 00:10:43.924 "num_base_bdevs_discovered": 4, 00:10:43.924 
"num_base_bdevs_operational": 4, 00:10:43.924 "base_bdevs_list": [ 00:10:43.924 { 00:10:43.924 "name": "BaseBdev1", 00:10:43.924 "uuid": "3fe2cb95-469c-5284-9fa3-8bd02e5514b7", 00:10:43.924 "is_configured": true, 00:10:43.924 "data_offset": 2048, 00:10:43.924 "data_size": 63488 00:10:43.924 }, 00:10:43.924 { 00:10:43.924 "name": "BaseBdev2", 00:10:43.924 "uuid": "11362f36-6d56-548a-afaa-22ff58867738", 00:10:43.924 "is_configured": true, 00:10:43.924 "data_offset": 2048, 00:10:43.924 "data_size": 63488 00:10:43.924 }, 00:10:43.924 { 00:10:43.924 "name": "BaseBdev3", 00:10:43.924 "uuid": "b4e9250f-d8c3-5c1d-8efd-2a68ca33120c", 00:10:43.924 "is_configured": true, 00:10:43.924 "data_offset": 2048, 00:10:43.924 "data_size": 63488 00:10:43.924 }, 00:10:43.924 { 00:10:43.924 "name": "BaseBdev4", 00:10:43.924 "uuid": "54dde4a9-8d24-562b-836f-3a267bfcb6b9", 00:10:43.924 "is_configured": true, 00:10:43.924 "data_offset": 2048, 00:10:43.924 "data_size": 63488 00:10:43.924 } 00:10:43.924 ] 00:10:43.924 }' 00:10:43.924 15:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.924 15:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.492 15:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:44.492 15:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:44.492 [2024-11-20 15:18:30.813680] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:45.455 15:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:45.455 15:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.455 15:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.455 15:18:31 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.455 15:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:45.455 15:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:45.455 15:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:45.455 15:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:45.455 15:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:45.455 15:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.455 15:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.455 15:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.455 15:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.455 15:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.455 15:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.455 15:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.455 15:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.455 15:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.455 15:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.455 15:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.455 15:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.455 15:18:31 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.455 15:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.455 "name": "raid_bdev1", 00:10:45.455 "uuid": "7e048286-4333-4cd5-b3b2-ba45b15a6702", 00:10:45.455 "strip_size_kb": 64, 00:10:45.455 "state": "online", 00:10:45.455 "raid_level": "raid0", 00:10:45.455 "superblock": true, 00:10:45.455 "num_base_bdevs": 4, 00:10:45.455 "num_base_bdevs_discovered": 4, 00:10:45.455 "num_base_bdevs_operational": 4, 00:10:45.455 "base_bdevs_list": [ 00:10:45.455 { 00:10:45.455 "name": "BaseBdev1", 00:10:45.455 "uuid": "3fe2cb95-469c-5284-9fa3-8bd02e5514b7", 00:10:45.455 "is_configured": true, 00:10:45.455 "data_offset": 2048, 00:10:45.455 "data_size": 63488 00:10:45.455 }, 00:10:45.455 { 00:10:45.455 "name": "BaseBdev2", 00:10:45.455 "uuid": "11362f36-6d56-548a-afaa-22ff58867738", 00:10:45.455 "is_configured": true, 00:10:45.455 "data_offset": 2048, 00:10:45.455 "data_size": 63488 00:10:45.455 }, 00:10:45.455 { 00:10:45.455 "name": "BaseBdev3", 00:10:45.455 "uuid": "b4e9250f-d8c3-5c1d-8efd-2a68ca33120c", 00:10:45.455 "is_configured": true, 00:10:45.455 "data_offset": 2048, 00:10:45.455 "data_size": 63488 00:10:45.455 }, 00:10:45.455 { 00:10:45.455 "name": "BaseBdev4", 00:10:45.455 "uuid": "54dde4a9-8d24-562b-836f-3a267bfcb6b9", 00:10:45.455 "is_configured": true, 00:10:45.455 "data_offset": 2048, 00:10:45.455 "data_size": 63488 00:10:45.455 } 00:10:45.455 ] 00:10:45.455 }' 00:10:45.455 15:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.455 15:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.715 15:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:45.715 15:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.715 15:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:45.715 [2024-11-20 15:18:32.141796] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:45.715 [2024-11-20 15:18:32.141831] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:45.715 [2024-11-20 15:18:32.144672] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:45.715 [2024-11-20 15:18:32.144738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.715 [2024-11-20 15:18:32.144786] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:45.715 [2024-11-20 15:18:32.144801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:45.715 15:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.715 { 00:10:45.715 "results": [ 00:10:45.715 { 00:10:45.715 "job": "raid_bdev1", 00:10:45.715 "core_mask": "0x1", 00:10:45.715 "workload": "randrw", 00:10:45.715 "percentage": 50, 00:10:45.715 "status": "finished", 00:10:45.715 "queue_depth": 1, 00:10:45.715 "io_size": 131072, 00:10:45.715 "runtime": 1.328229, 00:10:45.715 "iops": 15532.713108959373, 00:10:45.715 "mibps": 1941.5891386199216, 00:10:45.715 "io_failed": 1, 00:10:45.715 "io_timeout": 0, 00:10:45.715 "avg_latency_us": 88.89253983751992, 00:10:45.715 "min_latency_us": 26.936546184738955, 00:10:45.715 "max_latency_us": 1414.6827309236949 00:10:45.715 } 00:10:45.715 ], 00:10:45.715 "core_count": 1 00:10:45.715 } 00:10:45.715 15:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70969 00:10:45.716 15:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 70969 ']' 00:10:45.716 15:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 70969 00:10:45.716 15:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # 
uname 00:10:45.716 15:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:45.716 15:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70969 00:10:45.716 15:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:45.716 killing process with pid 70969 00:10:45.716 15:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:45.716 15:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70969' 00:10:45.716 15:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 70969 00:10:45.716 15:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 70969 00:10:45.716 [2024-11-20 15:18:32.178677] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:46.288 [2024-11-20 15:18:32.515443] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:47.668 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kwp6HjHD1u 00:10:47.668 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:47.668 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:47.668 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:10:47.668 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:47.668 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:47.668 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:47.668 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:10:47.668 00:10:47.668 real 0m4.660s 00:10:47.668 user 0m5.441s 00:10:47.668 sys 0m0.643s 00:10:47.668 
15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.668 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.668 ************************************ 00:10:47.668 END TEST raid_write_error_test 00:10:47.668 ************************************ 00:10:47.668 15:18:33 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:47.668 15:18:33 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:47.668 15:18:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:47.668 15:18:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.668 15:18:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:47.668 ************************************ 00:10:47.668 START TEST raid_state_function_test 00:10:47.668 ************************************ 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:47.668 15:18:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:47.668 15:18:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71112 00:10:47.668 Process raid pid: 71112 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71112' 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71112 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71112 ']' 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:47.668 15:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.668 [2024-11-20 15:18:33.893600] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:10:47.668 [2024-11-20 15:18:33.893748] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.668 [2024-11-20 15:18:34.074188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.927 [2024-11-20 15:18:34.189648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.213 [2024-11-20 15:18:34.410843] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.213 [2024-11-20 15:18:34.410891] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.472 15:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:48.472 15:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:48.472 15:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:48.472 15:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.472 15:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.472 [2024-11-20 15:18:34.768057] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:48.472 [2024-11-20 15:18:34.768117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:48.472 [2024-11-20 15:18:34.768130] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:48.472 [2024-11-20 15:18:34.768144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:48.472 [2024-11-20 15:18:34.768152] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:48.472 [2024-11-20 15:18:34.768166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:48.472 [2024-11-20 15:18:34.768173] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:48.472 [2024-11-20 15:18:34.768186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:48.472 15:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.472 15:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:48.472 15:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.472 15:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.472 15:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.472 15:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.472 15:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.472 15:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.472 15:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.472 15:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.472 15:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.472 15:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.472 15:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.472 15:18:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.472 15:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.472 15:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.472 15:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.472 "name": "Existed_Raid", 00:10:48.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.472 "strip_size_kb": 64, 00:10:48.472 "state": "configuring", 00:10:48.472 "raid_level": "concat", 00:10:48.472 "superblock": false, 00:10:48.472 "num_base_bdevs": 4, 00:10:48.472 "num_base_bdevs_discovered": 0, 00:10:48.472 "num_base_bdevs_operational": 4, 00:10:48.472 "base_bdevs_list": [ 00:10:48.472 { 00:10:48.472 "name": "BaseBdev1", 00:10:48.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.472 "is_configured": false, 00:10:48.472 "data_offset": 0, 00:10:48.472 "data_size": 0 00:10:48.472 }, 00:10:48.472 { 00:10:48.472 "name": "BaseBdev2", 00:10:48.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.472 "is_configured": false, 00:10:48.472 "data_offset": 0, 00:10:48.472 "data_size": 0 00:10:48.472 }, 00:10:48.472 { 00:10:48.472 "name": "BaseBdev3", 00:10:48.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.472 "is_configured": false, 00:10:48.472 "data_offset": 0, 00:10:48.472 "data_size": 0 00:10:48.472 }, 00:10:48.472 { 00:10:48.472 "name": "BaseBdev4", 00:10:48.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.472 "is_configured": false, 00:10:48.473 "data_offset": 0, 00:10:48.473 "data_size": 0 00:10:48.473 } 00:10:48.473 ] 00:10:48.473 }' 00:10:48.473 15:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.473 15:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.731 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:48.731 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.731 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.731 [2024-11-20 15:18:35.187472] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:48.731 [2024-11-20 15:18:35.187520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:48.731 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.731 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:48.731 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.731 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.731 [2024-11-20 15:18:35.195453] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:48.731 [2024-11-20 15:18:35.195505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:48.731 [2024-11-20 15:18:35.195532] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:48.731 [2024-11-20 15:18:35.195546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:48.731 [2024-11-20 15:18:35.195554] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:48.731 [2024-11-20 15:18:35.195567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:48.732 [2024-11-20 15:18:35.195575] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:48.732 [2024-11-20 15:18:35.195587] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:48.732 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.732 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:48.732 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.732 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.989 [2024-11-20 15:18:35.241582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:48.989 BaseBdev1 00:10:48.989 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.989 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:48.989 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:48.989 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:48.989 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:48.989 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:48.989 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:48.989 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:48.989 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.989 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.989 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.989 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:48.989 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.989 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.989 [ 00:10:48.989 { 00:10:48.989 "name": "BaseBdev1", 00:10:48.989 "aliases": [ 00:10:48.989 "aa2cee0a-5828-48aa-a3a7-42c5780b71f1" 00:10:48.989 ], 00:10:48.989 "product_name": "Malloc disk", 00:10:48.989 "block_size": 512, 00:10:48.989 "num_blocks": 65536, 00:10:48.989 "uuid": "aa2cee0a-5828-48aa-a3a7-42c5780b71f1", 00:10:48.989 "assigned_rate_limits": { 00:10:48.989 "rw_ios_per_sec": 0, 00:10:48.989 "rw_mbytes_per_sec": 0, 00:10:48.989 "r_mbytes_per_sec": 0, 00:10:48.989 "w_mbytes_per_sec": 0 00:10:48.989 }, 00:10:48.989 "claimed": true, 00:10:48.989 "claim_type": "exclusive_write", 00:10:48.989 "zoned": false, 00:10:48.989 "supported_io_types": { 00:10:48.989 "read": true, 00:10:48.989 "write": true, 00:10:48.989 "unmap": true, 00:10:48.989 "flush": true, 00:10:48.989 "reset": true, 00:10:48.989 "nvme_admin": false, 00:10:48.989 "nvme_io": false, 00:10:48.990 "nvme_io_md": false, 00:10:48.990 "write_zeroes": true, 00:10:48.990 "zcopy": true, 00:10:48.990 "get_zone_info": false, 00:10:48.990 "zone_management": false, 00:10:48.990 "zone_append": false, 00:10:48.990 "compare": false, 00:10:48.990 "compare_and_write": false, 00:10:48.990 "abort": true, 00:10:48.990 "seek_hole": false, 00:10:48.990 "seek_data": false, 00:10:48.990 "copy": true, 00:10:48.990 "nvme_iov_md": false 00:10:48.990 }, 00:10:48.990 "memory_domains": [ 00:10:48.990 { 00:10:48.990 "dma_device_id": "system", 00:10:48.990 "dma_device_type": 1 00:10:48.990 }, 00:10:48.990 { 00:10:48.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.990 "dma_device_type": 2 00:10:48.990 } 00:10:48.990 ], 00:10:48.990 "driver_specific": {} 00:10:48.990 } 00:10:48.990 ] 00:10:48.990 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:48.990 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:48.990 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:48.990 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.990 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.990 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.990 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.990 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.990 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.990 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.990 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.990 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.990 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.990 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.990 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.990 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.990 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.990 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.990 "name": "Existed_Raid", 
00:10:48.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.990 "strip_size_kb": 64, 00:10:48.990 "state": "configuring", 00:10:48.990 "raid_level": "concat", 00:10:48.990 "superblock": false, 00:10:48.990 "num_base_bdevs": 4, 00:10:48.990 "num_base_bdevs_discovered": 1, 00:10:48.990 "num_base_bdevs_operational": 4, 00:10:48.990 "base_bdevs_list": [ 00:10:48.990 { 00:10:48.990 "name": "BaseBdev1", 00:10:48.990 "uuid": "aa2cee0a-5828-48aa-a3a7-42c5780b71f1", 00:10:48.990 "is_configured": true, 00:10:48.990 "data_offset": 0, 00:10:48.990 "data_size": 65536 00:10:48.990 }, 00:10:48.990 { 00:10:48.990 "name": "BaseBdev2", 00:10:48.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.990 "is_configured": false, 00:10:48.990 "data_offset": 0, 00:10:48.990 "data_size": 0 00:10:48.990 }, 00:10:48.990 { 00:10:48.990 "name": "BaseBdev3", 00:10:48.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.990 "is_configured": false, 00:10:48.990 "data_offset": 0, 00:10:48.990 "data_size": 0 00:10:48.990 }, 00:10:48.990 { 00:10:48.990 "name": "BaseBdev4", 00:10:48.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.990 "is_configured": false, 00:10:48.990 "data_offset": 0, 00:10:48.990 "data_size": 0 00:10:48.990 } 00:10:48.990 ] 00:10:48.990 }' 00:10:48.990 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.990 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.249 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:49.249 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.249 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.249 [2024-11-20 15:18:35.681016] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:49.249 [2024-11-20 15:18:35.681080] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:49.249 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.249 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:49.249 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.249 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.249 [2024-11-20 15:18:35.689067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:49.249 [2024-11-20 15:18:35.691190] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:49.249 [2024-11-20 15:18:35.691243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:49.249 [2024-11-20 15:18:35.691271] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:49.249 [2024-11-20 15:18:35.691287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:49.249 [2024-11-20 15:18:35.691296] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:49.249 [2024-11-20 15:18:35.691309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:49.249 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.249 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:49.249 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:49.249 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:49.249 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.249 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.249 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.249 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.249 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.249 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.249 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.249 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.249 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.249 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.249 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.249 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.249 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.249 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.507 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.507 "name": "Existed_Raid", 00:10:49.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.507 "strip_size_kb": 64, 00:10:49.507 "state": "configuring", 00:10:49.507 "raid_level": "concat", 00:10:49.507 "superblock": false, 00:10:49.507 "num_base_bdevs": 4, 00:10:49.507 
"num_base_bdevs_discovered": 1, 00:10:49.507 "num_base_bdevs_operational": 4, 00:10:49.507 "base_bdevs_list": [ 00:10:49.507 { 00:10:49.507 "name": "BaseBdev1", 00:10:49.507 "uuid": "aa2cee0a-5828-48aa-a3a7-42c5780b71f1", 00:10:49.507 "is_configured": true, 00:10:49.507 "data_offset": 0, 00:10:49.507 "data_size": 65536 00:10:49.507 }, 00:10:49.507 { 00:10:49.507 "name": "BaseBdev2", 00:10:49.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.507 "is_configured": false, 00:10:49.507 "data_offset": 0, 00:10:49.507 "data_size": 0 00:10:49.507 }, 00:10:49.507 { 00:10:49.507 "name": "BaseBdev3", 00:10:49.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.507 "is_configured": false, 00:10:49.507 "data_offset": 0, 00:10:49.508 "data_size": 0 00:10:49.508 }, 00:10:49.508 { 00:10:49.508 "name": "BaseBdev4", 00:10:49.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.508 "is_configured": false, 00:10:49.508 "data_offset": 0, 00:10:49.508 "data_size": 0 00:10:49.508 } 00:10:49.508 ] 00:10:49.508 }' 00:10:49.508 15:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.508 15:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.766 [2024-11-20 15:18:36.140567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:49.766 BaseBdev2 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:49.766 15:18:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.766 [ 00:10:49.766 { 00:10:49.766 "name": "BaseBdev2", 00:10:49.766 "aliases": [ 00:10:49.766 "0edb80b6-15b5-4431-ae71-4725830f8af6" 00:10:49.766 ], 00:10:49.766 "product_name": "Malloc disk", 00:10:49.766 "block_size": 512, 00:10:49.766 "num_blocks": 65536, 00:10:49.766 "uuid": "0edb80b6-15b5-4431-ae71-4725830f8af6", 00:10:49.766 "assigned_rate_limits": { 00:10:49.766 "rw_ios_per_sec": 0, 00:10:49.766 "rw_mbytes_per_sec": 0, 00:10:49.766 "r_mbytes_per_sec": 0, 00:10:49.766 "w_mbytes_per_sec": 0 00:10:49.766 }, 00:10:49.766 "claimed": true, 00:10:49.766 "claim_type": "exclusive_write", 00:10:49.766 "zoned": false, 00:10:49.766 "supported_io_types": { 
00:10:49.766 "read": true, 00:10:49.766 "write": true, 00:10:49.766 "unmap": true, 00:10:49.766 "flush": true, 00:10:49.766 "reset": true, 00:10:49.766 "nvme_admin": false, 00:10:49.766 "nvme_io": false, 00:10:49.766 "nvme_io_md": false, 00:10:49.766 "write_zeroes": true, 00:10:49.766 "zcopy": true, 00:10:49.766 "get_zone_info": false, 00:10:49.766 "zone_management": false, 00:10:49.766 "zone_append": false, 00:10:49.766 "compare": false, 00:10:49.766 "compare_and_write": false, 00:10:49.766 "abort": true, 00:10:49.766 "seek_hole": false, 00:10:49.766 "seek_data": false, 00:10:49.766 "copy": true, 00:10:49.766 "nvme_iov_md": false 00:10:49.766 }, 00:10:49.766 "memory_domains": [ 00:10:49.766 { 00:10:49.766 "dma_device_id": "system", 00:10:49.766 "dma_device_type": 1 00:10:49.766 }, 00:10:49.766 { 00:10:49.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.766 "dma_device_type": 2 00:10:49.766 } 00:10:49.766 ], 00:10:49.766 "driver_specific": {} 00:10:49.766 } 00:10:49.766 ] 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.766 "name": "Existed_Raid", 00:10:49.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.766 "strip_size_kb": 64, 00:10:49.766 "state": "configuring", 00:10:49.766 "raid_level": "concat", 00:10:49.766 "superblock": false, 00:10:49.766 "num_base_bdevs": 4, 00:10:49.766 "num_base_bdevs_discovered": 2, 00:10:49.766 "num_base_bdevs_operational": 4, 00:10:49.766 "base_bdevs_list": [ 00:10:49.766 { 00:10:49.766 "name": "BaseBdev1", 00:10:49.766 "uuid": "aa2cee0a-5828-48aa-a3a7-42c5780b71f1", 00:10:49.766 "is_configured": true, 00:10:49.766 "data_offset": 0, 00:10:49.766 "data_size": 65536 00:10:49.766 }, 00:10:49.766 { 00:10:49.766 "name": "BaseBdev2", 00:10:49.766 "uuid": "0edb80b6-15b5-4431-ae71-4725830f8af6", 00:10:49.766 
"is_configured": true, 00:10:49.766 "data_offset": 0, 00:10:49.766 "data_size": 65536 00:10:49.766 }, 00:10:49.766 { 00:10:49.766 "name": "BaseBdev3", 00:10:49.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.766 "is_configured": false, 00:10:49.766 "data_offset": 0, 00:10:49.766 "data_size": 0 00:10:49.766 }, 00:10:49.766 { 00:10:49.766 "name": "BaseBdev4", 00:10:49.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.766 "is_configured": false, 00:10:49.766 "data_offset": 0, 00:10:49.766 "data_size": 0 00:10:49.766 } 00:10:49.766 ] 00:10:49.766 }' 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.766 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.333 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:50.333 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.333 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.333 [2024-11-20 15:18:36.641794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:50.333 BaseBdev3 00:10:50.333 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.333 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:50.333 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:50.333 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.333 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:50.333 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.333 15:18:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.333 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.333 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.333 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.333 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.333 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:50.333 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.333 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.333 [ 00:10:50.333 { 00:10:50.333 "name": "BaseBdev3", 00:10:50.333 "aliases": [ 00:10:50.333 "3d3e484a-0ffc-467a-8c1e-274cd5494cf8" 00:10:50.333 ], 00:10:50.333 "product_name": "Malloc disk", 00:10:50.333 "block_size": 512, 00:10:50.333 "num_blocks": 65536, 00:10:50.333 "uuid": "3d3e484a-0ffc-467a-8c1e-274cd5494cf8", 00:10:50.333 "assigned_rate_limits": { 00:10:50.333 "rw_ios_per_sec": 0, 00:10:50.333 "rw_mbytes_per_sec": 0, 00:10:50.333 "r_mbytes_per_sec": 0, 00:10:50.333 "w_mbytes_per_sec": 0 00:10:50.333 }, 00:10:50.333 "claimed": true, 00:10:50.333 "claim_type": "exclusive_write", 00:10:50.333 "zoned": false, 00:10:50.333 "supported_io_types": { 00:10:50.333 "read": true, 00:10:50.333 "write": true, 00:10:50.333 "unmap": true, 00:10:50.333 "flush": true, 00:10:50.333 "reset": true, 00:10:50.333 "nvme_admin": false, 00:10:50.333 "nvme_io": false, 00:10:50.333 "nvme_io_md": false, 00:10:50.333 "write_zeroes": true, 00:10:50.333 "zcopy": true, 00:10:50.333 "get_zone_info": false, 00:10:50.333 "zone_management": false, 00:10:50.333 "zone_append": false, 00:10:50.333 "compare": false, 00:10:50.333 "compare_and_write": false, 
00:10:50.333 "abort": true, 00:10:50.333 "seek_hole": false, 00:10:50.333 "seek_data": false, 00:10:50.333 "copy": true, 00:10:50.333 "nvme_iov_md": false 00:10:50.333 }, 00:10:50.333 "memory_domains": [ 00:10:50.333 { 00:10:50.333 "dma_device_id": "system", 00:10:50.333 "dma_device_type": 1 00:10:50.333 }, 00:10:50.333 { 00:10:50.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.333 "dma_device_type": 2 00:10:50.333 } 00:10:50.333 ], 00:10:50.333 "driver_specific": {} 00:10:50.333 } 00:10:50.333 ] 00:10:50.333 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.333 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:50.333 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:50.334 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:50.334 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:50.334 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.334 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.334 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.334 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.334 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.334 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.334 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.334 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:50.334 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.334 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.334 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.334 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.334 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.334 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.334 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.334 "name": "Existed_Raid", 00:10:50.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.334 "strip_size_kb": 64, 00:10:50.334 "state": "configuring", 00:10:50.334 "raid_level": "concat", 00:10:50.334 "superblock": false, 00:10:50.334 "num_base_bdevs": 4, 00:10:50.334 "num_base_bdevs_discovered": 3, 00:10:50.334 "num_base_bdevs_operational": 4, 00:10:50.334 "base_bdevs_list": [ 00:10:50.334 { 00:10:50.334 "name": "BaseBdev1", 00:10:50.334 "uuid": "aa2cee0a-5828-48aa-a3a7-42c5780b71f1", 00:10:50.334 "is_configured": true, 00:10:50.334 "data_offset": 0, 00:10:50.334 "data_size": 65536 00:10:50.334 }, 00:10:50.334 { 00:10:50.334 "name": "BaseBdev2", 00:10:50.334 "uuid": "0edb80b6-15b5-4431-ae71-4725830f8af6", 00:10:50.334 "is_configured": true, 00:10:50.334 "data_offset": 0, 00:10:50.334 "data_size": 65536 00:10:50.334 }, 00:10:50.334 { 00:10:50.334 "name": "BaseBdev3", 00:10:50.334 "uuid": "3d3e484a-0ffc-467a-8c1e-274cd5494cf8", 00:10:50.334 "is_configured": true, 00:10:50.334 "data_offset": 0, 00:10:50.334 "data_size": 65536 00:10:50.334 }, 00:10:50.334 { 00:10:50.334 "name": "BaseBdev4", 00:10:50.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.334 "is_configured": false, 
00:10:50.334 "data_offset": 0, 00:10:50.334 "data_size": 0 00:10:50.334 } 00:10:50.334 ] 00:10:50.334 }' 00:10:50.334 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.334 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.900 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:50.900 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.900 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.900 [2024-11-20 15:18:37.156975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:50.900 [2024-11-20 15:18:37.157036] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:50.900 [2024-11-20 15:18:37.157045] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:50.900 [2024-11-20 15:18:37.157357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:50.900 [2024-11-20 15:18:37.157528] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:50.900 [2024-11-20 15:18:37.157545] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:50.900 [2024-11-20 15:18:37.157843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.900 BaseBdev4 00:10:50.900 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.900 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:50.900 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:50.900 15:18:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.900 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:50.900 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.900 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.900 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.900 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.900 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.900 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.900 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:50.900 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.900 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.900 [ 00:10:50.900 { 00:10:50.900 "name": "BaseBdev4", 00:10:50.900 "aliases": [ 00:10:50.900 "919bf158-42d5-41ca-8cd0-3923f246b4df" 00:10:50.900 ], 00:10:50.900 "product_name": "Malloc disk", 00:10:50.900 "block_size": 512, 00:10:50.900 "num_blocks": 65536, 00:10:50.900 "uuid": "919bf158-42d5-41ca-8cd0-3923f246b4df", 00:10:50.900 "assigned_rate_limits": { 00:10:50.900 "rw_ios_per_sec": 0, 00:10:50.900 "rw_mbytes_per_sec": 0, 00:10:50.900 "r_mbytes_per_sec": 0, 00:10:50.900 "w_mbytes_per_sec": 0 00:10:50.900 }, 00:10:50.900 "claimed": true, 00:10:50.900 "claim_type": "exclusive_write", 00:10:50.900 "zoned": false, 00:10:50.900 "supported_io_types": { 00:10:50.900 "read": true, 00:10:50.900 "write": true, 00:10:50.900 "unmap": true, 00:10:50.900 "flush": true, 00:10:50.900 "reset": true, 00:10:50.900 
"nvme_admin": false, 00:10:50.900 "nvme_io": false, 00:10:50.900 "nvme_io_md": false, 00:10:50.901 "write_zeroes": true, 00:10:50.901 "zcopy": true, 00:10:50.901 "get_zone_info": false, 00:10:50.901 "zone_management": false, 00:10:50.901 "zone_append": false, 00:10:50.901 "compare": false, 00:10:50.901 "compare_and_write": false, 00:10:50.901 "abort": true, 00:10:50.901 "seek_hole": false, 00:10:50.901 "seek_data": false, 00:10:50.901 "copy": true, 00:10:50.901 "nvme_iov_md": false 00:10:50.901 }, 00:10:50.901 "memory_domains": [ 00:10:50.901 { 00:10:50.901 "dma_device_id": "system", 00:10:50.901 "dma_device_type": 1 00:10:50.901 }, 00:10:50.901 { 00:10:50.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.901 "dma_device_type": 2 00:10:50.901 } 00:10:50.901 ], 00:10:50.901 "driver_specific": {} 00:10:50.901 } 00:10:50.901 ] 00:10:50.901 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.901 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:50.901 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:50.901 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:50.901 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:50.901 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.901 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.901 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.901 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.901 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.901 
15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.901 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.901 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.901 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.901 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.901 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.901 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.901 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.901 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.901 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.901 "name": "Existed_Raid", 00:10:50.901 "uuid": "7dcef873-27fd-461f-ad93-0364f1a68849", 00:10:50.901 "strip_size_kb": 64, 00:10:50.901 "state": "online", 00:10:50.901 "raid_level": "concat", 00:10:50.901 "superblock": false, 00:10:50.901 "num_base_bdevs": 4, 00:10:50.901 "num_base_bdevs_discovered": 4, 00:10:50.901 "num_base_bdevs_operational": 4, 00:10:50.901 "base_bdevs_list": [ 00:10:50.901 { 00:10:50.901 "name": "BaseBdev1", 00:10:50.901 "uuid": "aa2cee0a-5828-48aa-a3a7-42c5780b71f1", 00:10:50.901 "is_configured": true, 00:10:50.901 "data_offset": 0, 00:10:50.901 "data_size": 65536 00:10:50.901 }, 00:10:50.901 { 00:10:50.901 "name": "BaseBdev2", 00:10:50.901 "uuid": "0edb80b6-15b5-4431-ae71-4725830f8af6", 00:10:50.901 "is_configured": true, 00:10:50.901 "data_offset": 0, 00:10:50.901 "data_size": 65536 00:10:50.901 }, 00:10:50.901 { 00:10:50.901 "name": "BaseBdev3", 
00:10:50.901 "uuid": "3d3e484a-0ffc-467a-8c1e-274cd5494cf8", 00:10:50.901 "is_configured": true, 00:10:50.901 "data_offset": 0, 00:10:50.901 "data_size": 65536 00:10:50.901 }, 00:10:50.901 { 00:10:50.901 "name": "BaseBdev4", 00:10:50.901 "uuid": "919bf158-42d5-41ca-8cd0-3923f246b4df", 00:10:50.901 "is_configured": true, 00:10:50.901 "data_offset": 0, 00:10:50.901 "data_size": 65536 00:10:50.901 } 00:10:50.901 ] 00:10:50.901 }' 00:10:50.901 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.901 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.159 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:51.159 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:51.159 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:51.159 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:51.159 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:51.159 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:51.159 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:51.159 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:51.159 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.159 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.159 [2024-11-20 15:18:37.612771] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.419 
15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:51.419 "name": "Existed_Raid", 00:10:51.419 "aliases": [ 00:10:51.419 "7dcef873-27fd-461f-ad93-0364f1a68849" 00:10:51.419 ], 00:10:51.419 "product_name": "Raid Volume", 00:10:51.419 "block_size": 512, 00:10:51.419 "num_blocks": 262144, 00:10:51.419 "uuid": "7dcef873-27fd-461f-ad93-0364f1a68849", 00:10:51.419 "assigned_rate_limits": { 00:10:51.419 "rw_ios_per_sec": 0, 00:10:51.419 "rw_mbytes_per_sec": 0, 00:10:51.419 "r_mbytes_per_sec": 0, 00:10:51.419 "w_mbytes_per_sec": 0 00:10:51.419 }, 00:10:51.419 "claimed": false, 00:10:51.419 "zoned": false, 00:10:51.419 "supported_io_types": { 00:10:51.419 "read": true, 00:10:51.419 "write": true, 00:10:51.419 "unmap": true, 00:10:51.419 "flush": true, 00:10:51.419 "reset": true, 00:10:51.419 "nvme_admin": false, 00:10:51.419 "nvme_io": false, 00:10:51.419 "nvme_io_md": false, 00:10:51.419 "write_zeroes": true, 00:10:51.419 "zcopy": false, 00:10:51.419 "get_zone_info": false, 00:10:51.419 "zone_management": false, 00:10:51.419 "zone_append": false, 00:10:51.419 "compare": false, 00:10:51.419 "compare_and_write": false, 00:10:51.419 "abort": false, 00:10:51.419 "seek_hole": false, 00:10:51.419 "seek_data": false, 00:10:51.419 "copy": false, 00:10:51.419 "nvme_iov_md": false 00:10:51.419 }, 00:10:51.419 "memory_domains": [ 00:10:51.419 { 00:10:51.419 "dma_device_id": "system", 00:10:51.419 "dma_device_type": 1 00:10:51.419 }, 00:10:51.419 { 00:10:51.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.419 "dma_device_type": 2 00:10:51.419 }, 00:10:51.419 { 00:10:51.419 "dma_device_id": "system", 00:10:51.419 "dma_device_type": 1 00:10:51.419 }, 00:10:51.419 { 00:10:51.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.419 "dma_device_type": 2 00:10:51.419 }, 00:10:51.419 { 00:10:51.419 "dma_device_id": "system", 00:10:51.419 "dma_device_type": 1 00:10:51.419 }, 00:10:51.419 { 00:10:51.419 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:51.419 "dma_device_type": 2 00:10:51.419 }, 00:10:51.419 { 00:10:51.419 "dma_device_id": "system", 00:10:51.419 "dma_device_type": 1 00:10:51.419 }, 00:10:51.419 { 00:10:51.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.419 "dma_device_type": 2 00:10:51.419 } 00:10:51.419 ], 00:10:51.419 "driver_specific": { 00:10:51.419 "raid": { 00:10:51.419 "uuid": "7dcef873-27fd-461f-ad93-0364f1a68849", 00:10:51.419 "strip_size_kb": 64, 00:10:51.419 "state": "online", 00:10:51.419 "raid_level": "concat", 00:10:51.419 "superblock": false, 00:10:51.419 "num_base_bdevs": 4, 00:10:51.419 "num_base_bdevs_discovered": 4, 00:10:51.419 "num_base_bdevs_operational": 4, 00:10:51.419 "base_bdevs_list": [ 00:10:51.419 { 00:10:51.419 "name": "BaseBdev1", 00:10:51.419 "uuid": "aa2cee0a-5828-48aa-a3a7-42c5780b71f1", 00:10:51.419 "is_configured": true, 00:10:51.419 "data_offset": 0, 00:10:51.419 "data_size": 65536 00:10:51.419 }, 00:10:51.419 { 00:10:51.419 "name": "BaseBdev2", 00:10:51.419 "uuid": "0edb80b6-15b5-4431-ae71-4725830f8af6", 00:10:51.419 "is_configured": true, 00:10:51.419 "data_offset": 0, 00:10:51.419 "data_size": 65536 00:10:51.419 }, 00:10:51.419 { 00:10:51.419 "name": "BaseBdev3", 00:10:51.419 "uuid": "3d3e484a-0ffc-467a-8c1e-274cd5494cf8", 00:10:51.419 "is_configured": true, 00:10:51.419 "data_offset": 0, 00:10:51.419 "data_size": 65536 00:10:51.419 }, 00:10:51.419 { 00:10:51.419 "name": "BaseBdev4", 00:10:51.419 "uuid": "919bf158-42d5-41ca-8cd0-3923f246b4df", 00:10:51.419 "is_configured": true, 00:10:51.419 "data_offset": 0, 00:10:51.419 "data_size": 65536 00:10:51.419 } 00:10:51.419 ] 00:10:51.419 } 00:10:51.419 } 00:10:51.419 }' 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:51.419 BaseBdev2 
00:10:51.419 BaseBdev3 00:10:51.419 BaseBdev4' 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.419 15:18:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.419 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.679 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.679 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.679 15:18:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.679 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:51.679 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.679 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.679 [2024-11-20 15:18:37.928050] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:51.679 [2024-11-20 15:18:37.928090] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:51.679 [2024-11-20 15:18:37.928154] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:51.679 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.679 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:51.679 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:51.679 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:51.679 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:51.679 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:51.679 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:51.679 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.679 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:51.679 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.679 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:51.679 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:51.679 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.679 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.679 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.679 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.679 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.679 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.679 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.679 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.679 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.679 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.679 "name": "Existed_Raid", 00:10:51.679 "uuid": "7dcef873-27fd-461f-ad93-0364f1a68849", 00:10:51.679 "strip_size_kb": 64, 00:10:51.679 "state": "offline", 00:10:51.679 "raid_level": "concat", 00:10:51.679 "superblock": false, 00:10:51.679 "num_base_bdevs": 4, 00:10:51.679 "num_base_bdevs_discovered": 3, 00:10:51.679 "num_base_bdevs_operational": 3, 00:10:51.679 "base_bdevs_list": [ 00:10:51.679 { 00:10:51.679 "name": null, 00:10:51.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.679 "is_configured": false, 00:10:51.679 "data_offset": 0, 00:10:51.679 "data_size": 65536 00:10:51.679 }, 00:10:51.679 { 00:10:51.679 "name": "BaseBdev2", 00:10:51.679 "uuid": "0edb80b6-15b5-4431-ae71-4725830f8af6", 00:10:51.679 "is_configured": 
true, 00:10:51.679 "data_offset": 0, 00:10:51.679 "data_size": 65536 00:10:51.679 }, 00:10:51.679 { 00:10:51.679 "name": "BaseBdev3", 00:10:51.679 "uuid": "3d3e484a-0ffc-467a-8c1e-274cd5494cf8", 00:10:51.679 "is_configured": true, 00:10:51.679 "data_offset": 0, 00:10:51.679 "data_size": 65536 00:10:51.679 }, 00:10:51.679 { 00:10:51.679 "name": "BaseBdev4", 00:10:51.679 "uuid": "919bf158-42d5-41ca-8cd0-3923f246b4df", 00:10:51.679 "is_configured": true, 00:10:51.679 "data_offset": 0, 00:10:51.679 "data_size": 65536 00:10:51.679 } 00:10:51.679 ] 00:10:51.679 }' 00:10:51.679 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.679 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.938 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:52.197 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:52.197 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.197 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:52.197 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.197 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.197 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.197 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:52.197 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:52.197 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:52.197 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:52.197 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.197 [2024-11-20 15:18:38.454990] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:52.197 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.197 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:52.197 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:52.197 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:52.197 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.197 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.197 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.197 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.197 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:52.197 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:52.197 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:52.197 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.197 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.197 [2024-11-20 15:18:38.604231] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:52.455 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.455 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:52.455 15:18:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:52.455 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.455 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.455 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.455 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:52.455 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.455 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:52.455 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:52.455 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:52.455 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.455 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.455 [2024-11-20 15:18:38.759494] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:52.455 [2024-11-20 15:18:38.759546] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:52.455 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.455 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:52.456 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:52.456 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.456 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:52.456 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.456 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.456 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.456 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:52.456 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:52.456 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:52.456 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:52.456 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:52.456 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:52.456 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.456 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.714 BaseBdev2 00:10:52.714 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.714 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:52.714 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:52.714 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:52.714 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:52.714 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:52.714 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:52.714 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:52.714 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.714 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.714 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.714 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:52.715 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.715 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.715 [ 00:10:52.715 { 00:10:52.715 "name": "BaseBdev2", 00:10:52.715 "aliases": [ 00:10:52.715 "f403ff65-f601-47ab-bb77-17e35a393deb" 00:10:52.715 ], 00:10:52.715 "product_name": "Malloc disk", 00:10:52.715 "block_size": 512, 00:10:52.715 "num_blocks": 65536, 00:10:52.715 "uuid": "f403ff65-f601-47ab-bb77-17e35a393deb", 00:10:52.715 "assigned_rate_limits": { 00:10:52.715 "rw_ios_per_sec": 0, 00:10:52.715 "rw_mbytes_per_sec": 0, 00:10:52.715 "r_mbytes_per_sec": 0, 00:10:52.715 "w_mbytes_per_sec": 0 00:10:52.715 }, 00:10:52.715 "claimed": false, 00:10:52.715 "zoned": false, 00:10:52.715 "supported_io_types": { 00:10:52.715 "read": true, 00:10:52.715 "write": true, 00:10:52.715 "unmap": true, 00:10:52.715 "flush": true, 00:10:52.715 "reset": true, 00:10:52.715 "nvme_admin": false, 00:10:52.715 "nvme_io": false, 00:10:52.715 "nvme_io_md": false, 00:10:52.715 "write_zeroes": true, 00:10:52.715 "zcopy": true, 00:10:52.715 "get_zone_info": false, 00:10:52.715 "zone_management": false, 00:10:52.715 "zone_append": false, 00:10:52.715 "compare": false, 00:10:52.715 "compare_and_write": false, 00:10:52.715 "abort": true, 00:10:52.715 "seek_hole": false, 00:10:52.715 
"seek_data": false,
00:10:52.715 "copy": true,
00:10:52.715 "nvme_iov_md": false
00:10:52.715 },
00:10:52.715 "memory_domains": [
00:10:52.715 {
00:10:52.715 "dma_device_id": "system",
00:10:52.715 "dma_device_type": 1
00:10:52.715 },
00:10:52.715 {
00:10:52.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:52.715 "dma_device_type": 2
00:10:52.715 }
00:10:52.715 ],
00:10:52.715 "driver_specific": {}
00:10:52.715 }
00:10:52.715 ]
00:10:52.715 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:52.715 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:10:52.715 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:52.715 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:52.715 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:10:52.715 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:52.715 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:52.715 BaseBdev3
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:52.715 [
00:10:52.715 {
00:10:52.715 "name": "BaseBdev3",
00:10:52.715 "aliases": [
00:10:52.715 "208b333b-91a6-4697-be15-f7ec2bbb032c"
00:10:52.715 ],
00:10:52.715 "product_name": "Malloc disk",
00:10:52.715 "block_size": 512,
00:10:52.715 "num_blocks": 65536,
00:10:52.715 "uuid": "208b333b-91a6-4697-be15-f7ec2bbb032c",
00:10:52.715 "assigned_rate_limits": {
00:10:52.715 "rw_ios_per_sec": 0,
00:10:52.715 "rw_mbytes_per_sec": 0,
00:10:52.715 "r_mbytes_per_sec": 0,
00:10:52.715 "w_mbytes_per_sec": 0
00:10:52.715 },
00:10:52.715 "claimed": false,
00:10:52.715 "zoned": false,
00:10:52.715 "supported_io_types": {
00:10:52.715 "read": true,
00:10:52.715 "write": true,
00:10:52.715 "unmap": true,
00:10:52.715 "flush": true,
00:10:52.715 "reset": true,
00:10:52.715 "nvme_admin": false,
00:10:52.715 "nvme_io": false,
00:10:52.715 "nvme_io_md": false,
00:10:52.715 "write_zeroes": true,
00:10:52.715 "zcopy": true,
00:10:52.715 "get_zone_info": false,
00:10:52.715 "zone_management": false,
00:10:52.715 "zone_append": false,
00:10:52.715 "compare": false,
00:10:52.715 "compare_and_write": false,
00:10:52.715 "abort": true,
00:10:52.715 "seek_hole": false,
00:10:52.715 "seek_data": false,
00:10:52.715 "copy": true,
00:10:52.715 "nvme_iov_md": false
00:10:52.715 },
00:10:52.715 "memory_domains": [
00:10:52.715 {
00:10:52.715 "dma_device_id": "system",
00:10:52.715 "dma_device_type": 1
00:10:52.715 },
00:10:52.715 {
00:10:52.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:52.715 "dma_device_type": 2
00:10:52.715 }
00:10:52.715 ],
00:10:52.715 "driver_specific": {}
00:10:52.715 }
00:10:52.715 ]
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:52.715 BaseBdev4
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:52.715
15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:10:52.715 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:52.716 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:52.716 [
00:10:52.716 {
00:10:52.716 "name": "BaseBdev4",
00:10:52.716 "aliases": [
00:10:52.716 "ca43df63-2390-4970-98b7-436313ef9e91"
00:10:52.716 ],
00:10:52.716 "product_name": "Malloc disk",
00:10:52.716 "block_size": 512,
00:10:52.716 "num_blocks": 65536,
00:10:52.716 "uuid": "ca43df63-2390-4970-98b7-436313ef9e91",
00:10:52.716 "assigned_rate_limits": {
00:10:52.716 "rw_ios_per_sec": 0,
00:10:52.716 "rw_mbytes_per_sec": 0,
00:10:52.716 "r_mbytes_per_sec": 0,
00:10:52.716 "w_mbytes_per_sec": 0
00:10:52.716 },
00:10:52.716 "claimed": false,
00:10:52.716 "zoned": false,
00:10:52.716 "supported_io_types": {
00:10:52.716 "read": true,
00:10:52.716 "write": true,
00:10:52.716 "unmap": true,
00:10:52.716 "flush": true,
00:10:52.716 "reset": true,
00:10:52.716 "nvme_admin": false,
00:10:52.716 "nvme_io": false,
00:10:52.716 "nvme_io_md": false,
00:10:52.716 "write_zeroes": true,
00:10:52.716 "zcopy": true,
00:10:52.716 "get_zone_info": false,
00:10:52.716 "zone_management": false,
00:10:52.716 "zone_append": false,
00:10:52.716 "compare": false,
00:10:52.716 "compare_and_write": false,
00:10:52.716 "abort": true,
00:10:52.716 "seek_hole": false,
00:10:52.716 "seek_data": false,
00:10:52.716
"copy": true,
00:10:52.716 "nvme_iov_md": false
00:10:52.716 },
00:10:52.716 "memory_domains": [
00:10:52.716 {
00:10:52.716 "dma_device_id": "system",
00:10:52.716 "dma_device_type": 1
00:10:52.716 },
00:10:52.716 {
00:10:52.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:52.716 "dma_device_type": 2
00:10:52.716 }
00:10:52.716 ],
00:10:52.716 "driver_specific": {}
00:10:52.716 }
00:10:52.716 ]
00:10:52.716 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:52.716 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:10:52.716 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:52.716 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:52.716 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:10:52.716 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:52.716 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:52.716 [2024-11-20 15:18:39.160625] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:52.716 [2024-11-20 15:18:39.160854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:52.716 [2024-11-20 15:18:39.160994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:52.716 [2024-11-20 15:18:39.163365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:52.716 [2024-11-20 15:18:39.163559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:10:52.716 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:52.716 15:18:39
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:10:52.716 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:52.716 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:52.716 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:52.716 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:52.716 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:52.716 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:52.716 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:52.716 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:52.716 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:52.716 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:52.716 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:52.716 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:52.716 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:53.088 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:53.088 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:53.088 "name": "Existed_Raid",
00:10:53.088 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:53.088 "strip_size_kb": 64,
00:10:53.088 "state": "configuring",
"raid_level": "concat",
00:10:53.088 "superblock": false,
00:10:53.088 "num_base_bdevs": 4,
00:10:53.088 "num_base_bdevs_discovered": 3,
00:10:53.088 "num_base_bdevs_operational": 4,
00:10:53.088 "base_bdevs_list": [
00:10:53.088 {
00:10:53.088 "name": "BaseBdev1",
00:10:53.088 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:53.088 "is_configured": false,
00:10:53.088 "data_offset": 0,
00:10:53.088 "data_size": 0
00:10:53.088 },
00:10:53.088 {
00:10:53.088 "name": "BaseBdev2",
00:10:53.088 "uuid": "f403ff65-f601-47ab-bb77-17e35a393deb",
00:10:53.088 "is_configured": true,
00:10:53.088 "data_offset": 0,
00:10:53.088 "data_size": 65536
00:10:53.088 },
00:10:53.088 {
00:10:53.088 "name": "BaseBdev3",
00:10:53.088 "uuid": "208b333b-91a6-4697-be15-f7ec2bbb032c",
00:10:53.088 "is_configured": true,
00:10:53.088 "data_offset": 0,
00:10:53.088 "data_size": 65536
00:10:53.088 },
00:10:53.088 {
00:10:53.088 "name": "BaseBdev4",
00:10:53.088 "uuid": "ca43df63-2390-4970-98b7-436313ef9e91",
00:10:53.088 "is_configured": true,
00:10:53.088 "data_offset": 0,
00:10:53.088 "data_size": 65536
00:10:53.088 }
00:10:53.088 ]
00:10:53.088 }'
00:10:53.088 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:53.088 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:53.360 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:10:53.360 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:53.360 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:53.360 [2024-11-20 15:18:39.580018] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:10:53.360 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:53.360 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- #
verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:10:53.360 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:53.360 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:53.360 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:53.360 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:53.360 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:53.360 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:53.360 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:53.360 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:53.360 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:53.360 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:53.360 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:53.360 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:53.360 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:53.360 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:53.360 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:53.360 "name": "Existed_Raid",
00:10:53.360 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:53.360 "strip_size_kb": 64,
00:10:53.360 "state": "configuring",
00:10:53.360 "raid_level": "concat",
00:10:53.360 "superblock": false,
00:10:53.360 "num_base_bdevs": 4,
00:10:53.360 "num_base_bdevs_discovered": 2,
00:10:53.360 "num_base_bdevs_operational": 4,
00:10:53.360 "base_bdevs_list": [
00:10:53.360 {
00:10:53.360 "name": "BaseBdev1",
00:10:53.360 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:53.360 "is_configured": false,
00:10:53.360 "data_offset": 0,
00:10:53.360 "data_size": 0
00:10:53.360 },
00:10:53.360 {
00:10:53.360 "name": null,
00:10:53.360 "uuid": "f403ff65-f601-47ab-bb77-17e35a393deb",
00:10:53.360 "is_configured": false,
00:10:53.360 "data_offset": 0,
00:10:53.360 "data_size": 65536
00:10:53.360 },
00:10:53.360 {
00:10:53.360 "name": "BaseBdev3",
00:10:53.360 "uuid": "208b333b-91a6-4697-be15-f7ec2bbb032c",
00:10:53.360 "is_configured": true,
00:10:53.360 "data_offset": 0,
00:10:53.360 "data_size": 65536
00:10:53.360 },
00:10:53.360 {
00:10:53.360 "name": "BaseBdev4",
00:10:53.360 "uuid": "ca43df63-2390-4970-98b7-436313ef9e91",
00:10:53.360 "is_configured": true,
00:10:53.360 "data_offset": 0,
00:10:53.360 "data_size": 65536
00:10:53.360 }
00:10:53.360 ]
00:10:53.360 }'
00:10:53.360 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:53.360 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:53.622 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:53.622 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:53.622 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:53.622 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:53.622 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:53.622 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:10:53.622 15:18:40
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:53.622 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:53.622 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:53.888 [2024-11-20 15:18:40.114117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:53.888 BaseBdev1
00:10:53.888 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:53.888 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:10:53.888 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:10:53.888 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:53.888 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:10:53.888 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:53.888 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:53.888 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:53.888 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:53.888 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:53.888 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:53.888 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:53.888 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:53.888 15:18:40 bdev_raid.raid_state_function_test --
common/autotest_common.sh@10 -- # set +x
00:10:53.888 [
00:10:53.888 {
00:10:53.888 "name": "BaseBdev1",
00:10:53.888 "aliases": [
00:10:53.888 "028873fc-a99c-41b7-9f9c-4c4acf22d0f0"
00:10:53.888 ],
00:10:53.888 "product_name": "Malloc disk",
00:10:53.888 "block_size": 512,
00:10:53.889 "num_blocks": 65536,
00:10:53.889 "uuid": "028873fc-a99c-41b7-9f9c-4c4acf22d0f0",
00:10:53.889 "assigned_rate_limits": {
00:10:53.889 "rw_ios_per_sec": 0,
00:10:53.889 "rw_mbytes_per_sec": 0,
00:10:53.889 "r_mbytes_per_sec": 0,
00:10:53.889 "w_mbytes_per_sec": 0
00:10:53.889 },
00:10:53.889 "claimed": true,
00:10:53.889 "claim_type": "exclusive_write",
00:10:53.889 "zoned": false,
00:10:53.889 "supported_io_types": {
00:10:53.889 "read": true,
00:10:53.889 "write": true,
00:10:53.889 "unmap": true,
00:10:53.889 "flush": true,
00:10:53.889 "reset": true,
00:10:53.889 "nvme_admin": false,
00:10:53.889 "nvme_io": false,
00:10:53.889 "nvme_io_md": false,
00:10:53.889 "write_zeroes": true,
00:10:53.889 "zcopy": true,
00:10:53.889 "get_zone_info": false,
00:10:53.889 "zone_management": false,
00:10:53.889 "zone_append": false,
00:10:53.889 "compare": false,
00:10:53.889 "compare_and_write": false,
00:10:53.889 "abort": true,
00:10:53.889 "seek_hole": false,
00:10:53.889 "seek_data": false,
00:10:53.889 "copy": true,
00:10:53.889 "nvme_iov_md": false
00:10:53.889 },
00:10:53.889 "memory_domains": [
00:10:53.889 {
00:10:53.889 "dma_device_id": "system",
00:10:53.889 "dma_device_type": 1
00:10:53.889 },
00:10:53.889 {
00:10:53.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:53.889 "dma_device_type": 2
00:10:53.889 }
00:10:53.889 ],
00:10:53.889 "driver_specific": {}
00:10:53.889 }
00:10:53.889 ]
00:10:53.889 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:53.889 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:10:53.889 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 --
# verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:10:53.889 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:53.889 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:53.889 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:53.889 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:53.889 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:53.889 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:53.889 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:53.889 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:53.889 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:53.889 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:53.889 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:53.889 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:53.889 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:53.889 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:53.889 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:53.889 "name": "Existed_Raid",
00:10:53.889 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:53.889 "strip_size_kb": 64,
00:10:53.889 "state": "configuring",
00:10:53.889 "raid_level": "concat",
00:10:53.889 "superblock": false,
00:10:53.889 "num_base_bdevs": 4,
00:10:53.889 "num_base_bdevs_discovered": 3,
00:10:53.889 "num_base_bdevs_operational": 4,
00:10:53.889 "base_bdevs_list": [
00:10:53.889 {
00:10:53.889 "name": "BaseBdev1",
00:10:53.889 "uuid": "028873fc-a99c-41b7-9f9c-4c4acf22d0f0",
00:10:53.889 "is_configured": true,
00:10:53.889 "data_offset": 0,
00:10:53.889 "data_size": 65536
00:10:53.889 },
00:10:53.889 {
00:10:53.889 "name": null,
00:10:53.889 "uuid": "f403ff65-f601-47ab-bb77-17e35a393deb",
00:10:53.889 "is_configured": false,
00:10:53.889 "data_offset": 0,
00:10:53.889 "data_size": 65536
00:10:53.889 },
00:10:53.889 {
00:10:53.889 "name": "BaseBdev3",
00:10:53.889 "uuid": "208b333b-91a6-4697-be15-f7ec2bbb032c",
00:10:53.889 "is_configured": true,
00:10:53.889 "data_offset": 0,
00:10:53.889 "data_size": 65536
00:10:53.889 },
00:10:53.889 {
00:10:53.889 "name": "BaseBdev4",
00:10:53.889 "uuid": "ca43df63-2390-4970-98b7-436313ef9e91",
00:10:53.889 "is_configured": true,
00:10:53.889 "data_offset": 0,
00:10:53.889 "data_size": 65536
00:10:53.889 }
00:10:53.889 ]
00:10:53.889 }'
00:10:53.889 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:53.889 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:54.148 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:54.148 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:54.148 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:54.148 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:54.148 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:54.148 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:10:54.148 15:18:40
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:10:54.148 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:54.148 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:54.148 [2024-11-20 15:18:40.593721] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:54.148 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:54.148 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:10:54.148 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:54.148 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:54.148 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:54.148 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:54.148 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:54.148 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:54.148 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:54.148 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:54.148 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:54.148 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:54.148 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:54.148 15:18:40 bdev_raid.raid_state_function_test --
common/autotest_common.sh@10 -- # set +x
00:10:54.148 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:54.148 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:54.407 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:54.407 "name": "Existed_Raid",
00:10:54.407 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:54.407 "strip_size_kb": 64,
00:10:54.407 "state": "configuring",
00:10:54.407 "raid_level": "concat",
00:10:54.407 "superblock": false,
00:10:54.407 "num_base_bdevs": 4,
00:10:54.407 "num_base_bdevs_discovered": 2,
00:10:54.407 "num_base_bdevs_operational": 4,
00:10:54.407 "base_bdevs_list": [
00:10:54.407 {
00:10:54.407 "name": "BaseBdev1",
00:10:54.407 "uuid": "028873fc-a99c-41b7-9f9c-4c4acf22d0f0",
00:10:54.407 "is_configured": true,
00:10:54.407 "data_offset": 0,
00:10:54.407 "data_size": 65536
00:10:54.407 },
00:10:54.407 {
00:10:54.407 "name": null,
00:10:54.407 "uuid": "f403ff65-f601-47ab-bb77-17e35a393deb",
00:10:54.407 "is_configured": false,
00:10:54.407 "data_offset": 0,
00:10:54.407 "data_size": 65536
00:10:54.407 },
00:10:54.407 {
00:10:54.407 "name": null,
00:10:54.407 "uuid": "208b333b-91a6-4697-be15-f7ec2bbb032c",
00:10:54.407 "is_configured": false,
00:10:54.407 "data_offset": 0,
00:10:54.407 "data_size": 65536
00:10:54.407 },
00:10:54.407 {
00:10:54.407 "name": "BaseBdev4",
00:10:54.407 "uuid": "ca43df63-2390-4970-98b7-436313ef9e91",
00:10:54.407 "is_configured": true,
00:10:54.407 "data_offset": 0,
00:10:54.407 "data_size": 65536
00:10:54.407 }
00:10:54.407 ]
00:10:54.407 }'
00:10:54.407 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:54.407 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:54.666 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq
'.[0].base_bdevs_list[2].is_configured'
00:10:54.667 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:54.667 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:54.667 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:54.667 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:54.667 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:10:54.667 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:10:54.667 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:54.667 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:54.667 [2024-11-20 15:18:41.077019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:54.667 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:54.667 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:10:54.667 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:54.667 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:54.667 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:54.667 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:54.667 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:54.667 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local
raid_bdev_info
00:10:54.667 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:54.667 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:54.667 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:54.667 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:54.667 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:54.667 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:54.667 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:54.667 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:54.667 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:54.667 "name": "Existed_Raid",
00:10:54.667 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:54.667 "strip_size_kb": 64,
00:10:54.667 "state": "configuring",
00:10:54.667 "raid_level": "concat",
00:10:54.667 "superblock": false,
00:10:54.667 "num_base_bdevs": 4,
00:10:54.667 "num_base_bdevs_discovered": 3,
00:10:54.667 "num_base_bdevs_operational": 4,
00:10:54.667 "base_bdevs_list": [
00:10:54.667 {
00:10:54.667 "name": "BaseBdev1",
00:10:54.667 "uuid": "028873fc-a99c-41b7-9f9c-4c4acf22d0f0",
00:10:54.667 "is_configured": true,
00:10:54.667 "data_offset": 0,
00:10:54.667 "data_size": 65536
00:10:54.667 },
00:10:54.667 {
00:10:54.667 "name": null,
00:10:54.667 "uuid": "f403ff65-f601-47ab-bb77-17e35a393deb",
00:10:54.667 "is_configured": false,
00:10:54.667 "data_offset": 0,
00:10:54.667 "data_size": 65536
00:10:54.667 },
00:10:54.667 {
00:10:54.667 "name": "BaseBdev3",
00:10:54.667 "uuid": "208b333b-91a6-4697-be15-f7ec2bbb032c",
00:10:54.667 "is_configured":
true, 00:10:54.667 "data_offset": 0, 00:10:54.667 "data_size": 65536 00:10:54.667 }, 00:10:54.667 { 00:10:54.667 "name": "BaseBdev4", 00:10:54.667 "uuid": "ca43df63-2390-4970-98b7-436313ef9e91", 00:10:54.667 "is_configured": true, 00:10:54.667 "data_offset": 0, 00:10:54.667 "data_size": 65536 00:10:54.667 } 00:10:54.667 ] 00:10:54.667 }' 00:10:54.667 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.667 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.233 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:55.233 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.233 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.233 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.233 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.233 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:55.233 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:55.233 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.233 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.233 [2024-11-20 15:18:41.572381] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:55.233 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.233 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:55.233 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:55.233 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.233 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.233 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.233 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.233 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.233 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.233 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.233 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.233 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.233 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.233 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.233 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.233 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.491 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.491 "name": "Existed_Raid", 00:10:55.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.491 "strip_size_kb": 64, 00:10:55.491 "state": "configuring", 00:10:55.491 "raid_level": "concat", 00:10:55.491 "superblock": false, 00:10:55.491 "num_base_bdevs": 4, 00:10:55.491 "num_base_bdevs_discovered": 2, 00:10:55.491 "num_base_bdevs_operational": 4, 00:10:55.491 
"base_bdevs_list": [ 00:10:55.491 { 00:10:55.491 "name": null, 00:10:55.491 "uuid": "028873fc-a99c-41b7-9f9c-4c4acf22d0f0", 00:10:55.491 "is_configured": false, 00:10:55.491 "data_offset": 0, 00:10:55.491 "data_size": 65536 00:10:55.491 }, 00:10:55.491 { 00:10:55.491 "name": null, 00:10:55.491 "uuid": "f403ff65-f601-47ab-bb77-17e35a393deb", 00:10:55.491 "is_configured": false, 00:10:55.491 "data_offset": 0, 00:10:55.491 "data_size": 65536 00:10:55.491 }, 00:10:55.491 { 00:10:55.491 "name": "BaseBdev3", 00:10:55.491 "uuid": "208b333b-91a6-4697-be15-f7ec2bbb032c", 00:10:55.491 "is_configured": true, 00:10:55.491 "data_offset": 0, 00:10:55.491 "data_size": 65536 00:10:55.491 }, 00:10:55.491 { 00:10:55.491 "name": "BaseBdev4", 00:10:55.491 "uuid": "ca43df63-2390-4970-98b7-436313ef9e91", 00:10:55.491 "is_configured": true, 00:10:55.491 "data_offset": 0, 00:10:55.491 "data_size": 65536 00:10:55.491 } 00:10:55.491 ] 00:10:55.491 }' 00:10:55.491 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.491 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.751 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:55.751 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.751 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.751 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.751 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.751 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:55.751 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:55.751 15:18:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.751 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.751 [2024-11-20 15:18:42.153529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:55.751 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.751 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:55.751 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.751 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.751 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.751 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.751 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.751 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.751 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.751 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.751 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.751 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.751 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.751 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.751 15:18:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.751 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.751 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.751 "name": "Existed_Raid", 00:10:55.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.751 "strip_size_kb": 64, 00:10:55.751 "state": "configuring", 00:10:55.751 "raid_level": "concat", 00:10:55.751 "superblock": false, 00:10:55.751 "num_base_bdevs": 4, 00:10:55.751 "num_base_bdevs_discovered": 3, 00:10:55.751 "num_base_bdevs_operational": 4, 00:10:55.751 "base_bdevs_list": [ 00:10:55.751 { 00:10:55.751 "name": null, 00:10:55.751 "uuid": "028873fc-a99c-41b7-9f9c-4c4acf22d0f0", 00:10:55.751 "is_configured": false, 00:10:55.751 "data_offset": 0, 00:10:55.751 "data_size": 65536 00:10:55.751 }, 00:10:55.751 { 00:10:55.751 "name": "BaseBdev2", 00:10:55.751 "uuid": "f403ff65-f601-47ab-bb77-17e35a393deb", 00:10:55.751 "is_configured": true, 00:10:55.751 "data_offset": 0, 00:10:55.751 "data_size": 65536 00:10:55.751 }, 00:10:55.751 { 00:10:55.751 "name": "BaseBdev3", 00:10:55.751 "uuid": "208b333b-91a6-4697-be15-f7ec2bbb032c", 00:10:55.751 "is_configured": true, 00:10:55.751 "data_offset": 0, 00:10:55.751 "data_size": 65536 00:10:55.751 }, 00:10:55.751 { 00:10:55.751 "name": "BaseBdev4", 00:10:55.751 "uuid": "ca43df63-2390-4970-98b7-436313ef9e91", 00:10:55.751 "is_configured": true, 00:10:55.751 "data_offset": 0, 00:10:55.751 "data_size": 65536 00:10:55.751 } 00:10:55.751 ] 00:10:55.751 }' 00:10:55.751 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.751 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 028873fc-a99c-41b7-9f9c-4c4acf22d0f0 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.319 [2024-11-20 15:18:42.667338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:56.319 [2024-11-20 15:18:42.667391] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:56.319 [2024-11-20 15:18:42.667400] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:56.319 [2024-11-20 15:18:42.667710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:56.319 [2024-11-20 15:18:42.667851] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:56.319 [2024-11-20 15:18:42.667866] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:56.319 [2024-11-20 15:18:42.668126] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.319 NewBaseBdev 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.319 [ 00:10:56.319 { 
00:10:56.319 "name": "NewBaseBdev", 00:10:56.319 "aliases": [ 00:10:56.319 "028873fc-a99c-41b7-9f9c-4c4acf22d0f0" 00:10:56.319 ], 00:10:56.319 "product_name": "Malloc disk", 00:10:56.319 "block_size": 512, 00:10:56.319 "num_blocks": 65536, 00:10:56.319 "uuid": "028873fc-a99c-41b7-9f9c-4c4acf22d0f0", 00:10:56.319 "assigned_rate_limits": { 00:10:56.319 "rw_ios_per_sec": 0, 00:10:56.319 "rw_mbytes_per_sec": 0, 00:10:56.319 "r_mbytes_per_sec": 0, 00:10:56.319 "w_mbytes_per_sec": 0 00:10:56.319 }, 00:10:56.319 "claimed": true, 00:10:56.319 "claim_type": "exclusive_write", 00:10:56.319 "zoned": false, 00:10:56.319 "supported_io_types": { 00:10:56.319 "read": true, 00:10:56.319 "write": true, 00:10:56.319 "unmap": true, 00:10:56.319 "flush": true, 00:10:56.319 "reset": true, 00:10:56.319 "nvme_admin": false, 00:10:56.319 "nvme_io": false, 00:10:56.319 "nvme_io_md": false, 00:10:56.319 "write_zeroes": true, 00:10:56.319 "zcopy": true, 00:10:56.319 "get_zone_info": false, 00:10:56.319 "zone_management": false, 00:10:56.319 "zone_append": false, 00:10:56.319 "compare": false, 00:10:56.319 "compare_and_write": false, 00:10:56.319 "abort": true, 00:10:56.319 "seek_hole": false, 00:10:56.319 "seek_data": false, 00:10:56.319 "copy": true, 00:10:56.319 "nvme_iov_md": false 00:10:56.319 }, 00:10:56.319 "memory_domains": [ 00:10:56.319 { 00:10:56.319 "dma_device_id": "system", 00:10:56.319 "dma_device_type": 1 00:10:56.319 }, 00:10:56.319 { 00:10:56.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.319 "dma_device_type": 2 00:10:56.319 } 00:10:56.319 ], 00:10:56.319 "driver_specific": {} 00:10:56.319 } 00:10:56.319 ] 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:56.319 
15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.319 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.320 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.320 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.320 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.320 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.320 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.320 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.320 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.320 "name": "Existed_Raid", 00:10:56.320 "uuid": "074bdc87-6cd1-4038-bb60-3c209a5029ff", 00:10:56.320 "strip_size_kb": 64, 00:10:56.320 "state": "online", 00:10:56.320 "raid_level": "concat", 00:10:56.320 "superblock": false, 00:10:56.320 "num_base_bdevs": 4, 00:10:56.320 "num_base_bdevs_discovered": 4, 00:10:56.320 
"num_base_bdevs_operational": 4, 00:10:56.320 "base_bdevs_list": [ 00:10:56.320 { 00:10:56.320 "name": "NewBaseBdev", 00:10:56.320 "uuid": "028873fc-a99c-41b7-9f9c-4c4acf22d0f0", 00:10:56.320 "is_configured": true, 00:10:56.320 "data_offset": 0, 00:10:56.320 "data_size": 65536 00:10:56.320 }, 00:10:56.320 { 00:10:56.320 "name": "BaseBdev2", 00:10:56.320 "uuid": "f403ff65-f601-47ab-bb77-17e35a393deb", 00:10:56.320 "is_configured": true, 00:10:56.320 "data_offset": 0, 00:10:56.320 "data_size": 65536 00:10:56.320 }, 00:10:56.320 { 00:10:56.320 "name": "BaseBdev3", 00:10:56.320 "uuid": "208b333b-91a6-4697-be15-f7ec2bbb032c", 00:10:56.320 "is_configured": true, 00:10:56.320 "data_offset": 0, 00:10:56.320 "data_size": 65536 00:10:56.320 }, 00:10:56.320 { 00:10:56.320 "name": "BaseBdev4", 00:10:56.320 "uuid": "ca43df63-2390-4970-98b7-436313ef9e91", 00:10:56.320 "is_configured": true, 00:10:56.320 "data_offset": 0, 00:10:56.320 "data_size": 65536 00:10:56.320 } 00:10:56.320 ] 00:10:56.320 }' 00:10:56.320 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.320 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.888 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:56.888 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:56.888 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:56.888 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:56.888 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:56.888 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:56.888 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:10:56.888 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.888 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.888 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:56.888 [2024-11-20 15:18:43.175311] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:56.888 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.888 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:56.888 "name": "Existed_Raid", 00:10:56.888 "aliases": [ 00:10:56.888 "074bdc87-6cd1-4038-bb60-3c209a5029ff" 00:10:56.888 ], 00:10:56.888 "product_name": "Raid Volume", 00:10:56.888 "block_size": 512, 00:10:56.888 "num_blocks": 262144, 00:10:56.888 "uuid": "074bdc87-6cd1-4038-bb60-3c209a5029ff", 00:10:56.888 "assigned_rate_limits": { 00:10:56.888 "rw_ios_per_sec": 0, 00:10:56.888 "rw_mbytes_per_sec": 0, 00:10:56.888 "r_mbytes_per_sec": 0, 00:10:56.888 "w_mbytes_per_sec": 0 00:10:56.888 }, 00:10:56.888 "claimed": false, 00:10:56.888 "zoned": false, 00:10:56.888 "supported_io_types": { 00:10:56.888 "read": true, 00:10:56.888 "write": true, 00:10:56.888 "unmap": true, 00:10:56.888 "flush": true, 00:10:56.888 "reset": true, 00:10:56.888 "nvme_admin": false, 00:10:56.888 "nvme_io": false, 00:10:56.888 "nvme_io_md": false, 00:10:56.888 "write_zeroes": true, 00:10:56.888 "zcopy": false, 00:10:56.888 "get_zone_info": false, 00:10:56.888 "zone_management": false, 00:10:56.888 "zone_append": false, 00:10:56.888 "compare": false, 00:10:56.888 "compare_and_write": false, 00:10:56.888 "abort": false, 00:10:56.888 "seek_hole": false, 00:10:56.888 "seek_data": false, 00:10:56.888 "copy": false, 00:10:56.888 "nvme_iov_md": false 00:10:56.888 }, 00:10:56.888 "memory_domains": [ 00:10:56.888 { 00:10:56.888 "dma_device_id": "system", 
00:10:56.888 "dma_device_type": 1 00:10:56.888 }, 00:10:56.888 { 00:10:56.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.888 "dma_device_type": 2 00:10:56.888 }, 00:10:56.888 { 00:10:56.888 "dma_device_id": "system", 00:10:56.888 "dma_device_type": 1 00:10:56.888 }, 00:10:56.888 { 00:10:56.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.888 "dma_device_type": 2 00:10:56.888 }, 00:10:56.888 { 00:10:56.888 "dma_device_id": "system", 00:10:56.888 "dma_device_type": 1 00:10:56.888 }, 00:10:56.888 { 00:10:56.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.888 "dma_device_type": 2 00:10:56.888 }, 00:10:56.888 { 00:10:56.888 "dma_device_id": "system", 00:10:56.888 "dma_device_type": 1 00:10:56.888 }, 00:10:56.888 { 00:10:56.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.888 "dma_device_type": 2 00:10:56.888 } 00:10:56.888 ], 00:10:56.888 "driver_specific": { 00:10:56.888 "raid": { 00:10:56.888 "uuid": "074bdc87-6cd1-4038-bb60-3c209a5029ff", 00:10:56.888 "strip_size_kb": 64, 00:10:56.888 "state": "online", 00:10:56.888 "raid_level": "concat", 00:10:56.888 "superblock": false, 00:10:56.888 "num_base_bdevs": 4, 00:10:56.888 "num_base_bdevs_discovered": 4, 00:10:56.888 "num_base_bdevs_operational": 4, 00:10:56.888 "base_bdevs_list": [ 00:10:56.888 { 00:10:56.888 "name": "NewBaseBdev", 00:10:56.888 "uuid": "028873fc-a99c-41b7-9f9c-4c4acf22d0f0", 00:10:56.888 "is_configured": true, 00:10:56.888 "data_offset": 0, 00:10:56.888 "data_size": 65536 00:10:56.888 }, 00:10:56.888 { 00:10:56.888 "name": "BaseBdev2", 00:10:56.888 "uuid": "f403ff65-f601-47ab-bb77-17e35a393deb", 00:10:56.888 "is_configured": true, 00:10:56.888 "data_offset": 0, 00:10:56.888 "data_size": 65536 00:10:56.888 }, 00:10:56.888 { 00:10:56.888 "name": "BaseBdev3", 00:10:56.888 "uuid": "208b333b-91a6-4697-be15-f7ec2bbb032c", 00:10:56.888 "is_configured": true, 00:10:56.888 "data_offset": 0, 00:10:56.888 "data_size": 65536 00:10:56.888 }, 00:10:56.888 { 00:10:56.888 "name": "BaseBdev4", 
00:10:56.888 "uuid": "ca43df63-2390-4970-98b7-436313ef9e91", 00:10:56.888 "is_configured": true, 00:10:56.888 "data_offset": 0, 00:10:56.888 "data_size": 65536 00:10:56.888 } 00:10:56.888 ] 00:10:56.888 } 00:10:56.888 } 00:10:56.888 }' 00:10:56.888 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:56.889 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:56.889 BaseBdev2 00:10:56.889 BaseBdev3 00:10:56.889 BaseBdev4' 00:10:56.889 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.889 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:56.889 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.889 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.889 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:56.889 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.889 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.889 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.889 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.889 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.889 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.889 15:18:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:56.889 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.889 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.889 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.889 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.148 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.148 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.148 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.148 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:57.148 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.148 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.148 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.148 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.148 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.148 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.148 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.148 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:57.148 15:18:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.148 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.148 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.148 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.148 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.148 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.148 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:57.148 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.148 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.148 [2024-11-20 15:18:43.470939] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:57.148 [2024-11-20 15:18:43.470975] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:57.148 [2024-11-20 15:18:43.471058] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.148 [2024-11-20 15:18:43.471126] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:57.148 [2024-11-20 15:18:43.471138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:57.148 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.148 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71112 00:10:57.148 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 71112 ']' 00:10:57.148 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71112 00:10:57.148 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:57.148 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:57.148 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71112 00:10:57.148 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:57.148 killing process with pid 71112 00:10:57.148 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:57.148 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71112' 00:10:57.148 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71112 00:10:57.148 [2024-11-20 15:18:43.525007] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:57.148 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71112 00:10:57.716 [2024-11-20 15:18:43.930067] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:58.652 15:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:58.652 00:10:58.652 real 0m11.290s 00:10:58.652 user 0m17.963s 00:10:58.652 sys 0m2.204s 00:10:58.652 ************************************ 00:10:58.652 END TEST raid_state_function_test 00:10:58.652 ************************************ 00:10:58.652 15:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.652 15:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.912 15:18:45 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
00:10:58.912 15:18:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:58.912 15:18:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:58.912 15:18:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:58.912 ************************************ 00:10:58.912 START TEST raid_state_function_test_sb 00:10:58.912 ************************************ 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:58.912 15:18:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:58.912 Process raid pid: 71780 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71780 00:10:58.912 15:18:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71780' 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71780 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71780 ']' 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.912 15:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:58.913 15:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.913 15:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:58.913 15:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.913 [2024-11-20 15:18:45.260292] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:10:58.913 [2024-11-20 15:18:45.260424] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:59.172 [2024-11-20 15:18:45.442569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.172 [2024-11-20 15:18:45.565393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.430 [2024-11-20 15:18:45.780733] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.430 [2024-11-20 15:18:45.780992] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.689 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:59.689 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:59.689 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:59.689 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.689 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.689 [2024-11-20 15:18:46.119359] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:59.689 [2024-11-20 15:18:46.119419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:59.689 [2024-11-20 15:18:46.119432] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:59.689 [2024-11-20 15:18:46.119445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:59.689 [2024-11-20 15:18:46.119459] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:10:59.689 [2024-11-20 15:18:46.119472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:59.689 [2024-11-20 15:18:46.119479] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:59.689 [2024-11-20 15:18:46.119491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:59.689 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.689 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:59.689 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.689 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.689 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.689 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.689 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.689 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.689 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.689 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.689 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.689 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.689 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.689 
15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.690 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.690 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.948 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.948 "name": "Existed_Raid", 00:10:59.948 "uuid": "79dfa52f-19f4-466b-b1d0-adf11d626a1d", 00:10:59.948 "strip_size_kb": 64, 00:10:59.948 "state": "configuring", 00:10:59.948 "raid_level": "concat", 00:10:59.948 "superblock": true, 00:10:59.948 "num_base_bdevs": 4, 00:10:59.948 "num_base_bdevs_discovered": 0, 00:10:59.948 "num_base_bdevs_operational": 4, 00:10:59.948 "base_bdevs_list": [ 00:10:59.948 { 00:10:59.948 "name": "BaseBdev1", 00:10:59.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.948 "is_configured": false, 00:10:59.948 "data_offset": 0, 00:10:59.948 "data_size": 0 00:10:59.948 }, 00:10:59.948 { 00:10:59.948 "name": "BaseBdev2", 00:10:59.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.948 "is_configured": false, 00:10:59.948 "data_offset": 0, 00:10:59.948 "data_size": 0 00:10:59.948 }, 00:10:59.948 { 00:10:59.948 "name": "BaseBdev3", 00:10:59.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.948 "is_configured": false, 00:10:59.948 "data_offset": 0, 00:10:59.948 "data_size": 0 00:10:59.948 }, 00:10:59.948 { 00:10:59.948 "name": "BaseBdev4", 00:10:59.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.948 "is_configured": false, 00:10:59.948 "data_offset": 0, 00:10:59.948 "data_size": 0 00:10:59.948 } 00:10:59.948 ] 00:10:59.948 }' 00:10:59.948 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.948 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.207 15:18:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:00.207 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.207 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.207 [2024-11-20 15:18:46.574966] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:00.207 [2024-11-20 15:18:46.575015] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:00.207 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.207 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:00.207 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.207 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.207 [2024-11-20 15:18:46.583001] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:00.207 [2024-11-20 15:18:46.583060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:00.207 [2024-11-20 15:18:46.583069] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:00.207 [2024-11-20 15:18:46.583082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:00.207 [2024-11-20 15:18:46.583090] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:00.208 [2024-11-20 15:18:46.583102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:00.208 [2024-11-20 15:18:46.583110] bdev.c:8626:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:00.208 [2024-11-20 15:18:46.583121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.208 [2024-11-20 15:18:46.630116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:00.208 BaseBdev1 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.208 [ 00:11:00.208 { 00:11:00.208 "name": "BaseBdev1", 00:11:00.208 "aliases": [ 00:11:00.208 "04e70a6b-3165-46b3-8adc-a5726eb70745" 00:11:00.208 ], 00:11:00.208 "product_name": "Malloc disk", 00:11:00.208 "block_size": 512, 00:11:00.208 "num_blocks": 65536, 00:11:00.208 "uuid": "04e70a6b-3165-46b3-8adc-a5726eb70745", 00:11:00.208 "assigned_rate_limits": { 00:11:00.208 "rw_ios_per_sec": 0, 00:11:00.208 "rw_mbytes_per_sec": 0, 00:11:00.208 "r_mbytes_per_sec": 0, 00:11:00.208 "w_mbytes_per_sec": 0 00:11:00.208 }, 00:11:00.208 "claimed": true, 00:11:00.208 "claim_type": "exclusive_write", 00:11:00.208 "zoned": false, 00:11:00.208 "supported_io_types": { 00:11:00.208 "read": true, 00:11:00.208 "write": true, 00:11:00.208 "unmap": true, 00:11:00.208 "flush": true, 00:11:00.208 "reset": true, 00:11:00.208 "nvme_admin": false, 00:11:00.208 "nvme_io": false, 00:11:00.208 "nvme_io_md": false, 00:11:00.208 "write_zeroes": true, 00:11:00.208 "zcopy": true, 00:11:00.208 "get_zone_info": false, 00:11:00.208 "zone_management": false, 00:11:00.208 "zone_append": false, 00:11:00.208 "compare": false, 00:11:00.208 "compare_and_write": false, 00:11:00.208 "abort": true, 00:11:00.208 "seek_hole": false, 00:11:00.208 "seek_data": false, 00:11:00.208 "copy": true, 00:11:00.208 "nvme_iov_md": false 00:11:00.208 }, 00:11:00.208 "memory_domains": [ 00:11:00.208 { 00:11:00.208 "dma_device_id": "system", 00:11:00.208 "dma_device_type": 1 00:11:00.208 }, 00:11:00.208 { 00:11:00.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.208 "dma_device_type": 2 00:11:00.208 } 
00:11:00.208 ], 00:11:00.208 "driver_specific": {} 00:11:00.208 } 00:11:00.208 ] 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.208 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.466 15:18:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.466 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.466 "name": "Existed_Raid", 00:11:00.466 "uuid": "521146f4-8982-42b0-9c0b-966b7bbb380c", 00:11:00.466 "strip_size_kb": 64, 00:11:00.466 "state": "configuring", 00:11:00.466 "raid_level": "concat", 00:11:00.466 "superblock": true, 00:11:00.466 "num_base_bdevs": 4, 00:11:00.466 "num_base_bdevs_discovered": 1, 00:11:00.466 "num_base_bdevs_operational": 4, 00:11:00.466 "base_bdevs_list": [ 00:11:00.466 { 00:11:00.466 "name": "BaseBdev1", 00:11:00.466 "uuid": "04e70a6b-3165-46b3-8adc-a5726eb70745", 00:11:00.466 "is_configured": true, 00:11:00.466 "data_offset": 2048, 00:11:00.466 "data_size": 63488 00:11:00.466 }, 00:11:00.466 { 00:11:00.466 "name": "BaseBdev2", 00:11:00.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.466 "is_configured": false, 00:11:00.466 "data_offset": 0, 00:11:00.466 "data_size": 0 00:11:00.466 }, 00:11:00.466 { 00:11:00.466 "name": "BaseBdev3", 00:11:00.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.466 "is_configured": false, 00:11:00.466 "data_offset": 0, 00:11:00.466 "data_size": 0 00:11:00.466 }, 00:11:00.466 { 00:11:00.466 "name": "BaseBdev4", 00:11:00.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.466 "is_configured": false, 00:11:00.466 "data_offset": 0, 00:11:00.466 "data_size": 0 00:11:00.466 } 00:11:00.466 ] 00:11:00.466 }' 00:11:00.466 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.466 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.725 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:00.725 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.725 15:18:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.725 [2024-11-20 15:18:47.069558] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:00.725 [2024-11-20 15:18:47.069620] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:00.725 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.725 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:00.725 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.725 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.725 [2024-11-20 15:18:47.077625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:00.725 [2024-11-20 15:18:47.079756] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:00.725 [2024-11-20 15:18:47.079801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:00.725 [2024-11-20 15:18:47.079813] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:00.725 [2024-11-20 15:18:47.079827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:00.725 [2024-11-20 15:18:47.079836] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:00.725 [2024-11-20 15:18:47.079848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:00.725 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.725 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:00.725 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:00.725 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:00.725 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.725 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.725 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.725 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.725 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.725 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.725 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.725 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.725 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.725 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.725 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.725 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.725 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.725 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.725 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:00.725 "name": "Existed_Raid", 00:11:00.725 "uuid": "735d41c6-7a79-492a-b825-5d2838d9ba01", 00:11:00.725 "strip_size_kb": 64, 00:11:00.725 "state": "configuring", 00:11:00.725 "raid_level": "concat", 00:11:00.725 "superblock": true, 00:11:00.725 "num_base_bdevs": 4, 00:11:00.725 "num_base_bdevs_discovered": 1, 00:11:00.725 "num_base_bdevs_operational": 4, 00:11:00.726 "base_bdevs_list": [ 00:11:00.726 { 00:11:00.726 "name": "BaseBdev1", 00:11:00.726 "uuid": "04e70a6b-3165-46b3-8adc-a5726eb70745", 00:11:00.726 "is_configured": true, 00:11:00.726 "data_offset": 2048, 00:11:00.726 "data_size": 63488 00:11:00.726 }, 00:11:00.726 { 00:11:00.726 "name": "BaseBdev2", 00:11:00.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.726 "is_configured": false, 00:11:00.726 "data_offset": 0, 00:11:00.726 "data_size": 0 00:11:00.726 }, 00:11:00.726 { 00:11:00.726 "name": "BaseBdev3", 00:11:00.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.726 "is_configured": false, 00:11:00.726 "data_offset": 0, 00:11:00.726 "data_size": 0 00:11:00.726 }, 00:11:00.726 { 00:11:00.726 "name": "BaseBdev4", 00:11:00.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.726 "is_configured": false, 00:11:00.726 "data_offset": 0, 00:11:00.726 "data_size": 0 00:11:00.726 } 00:11:00.726 ] 00:11:00.726 }' 00:11:00.726 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.726 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.293 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:01.293 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.293 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.293 [2024-11-20 15:18:47.553309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:01.293 BaseBdev2 00:11:01.293 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.293 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:01.293 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:01.293 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:01.293 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:01.293 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:01.293 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:01.293 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:01.293 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.293 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.293 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.293 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:01.293 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.293 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.293 [ 00:11:01.293 { 00:11:01.293 "name": "BaseBdev2", 00:11:01.293 "aliases": [ 00:11:01.294 "0979d08a-6055-46c3-bb21-1d549b31a6f8" 00:11:01.294 ], 00:11:01.294 "product_name": "Malloc disk", 00:11:01.294 "block_size": 512, 00:11:01.294 "num_blocks": 65536, 00:11:01.294 "uuid": "0979d08a-6055-46c3-bb21-1d549b31a6f8", 
00:11:01.294 "assigned_rate_limits": { 00:11:01.294 "rw_ios_per_sec": 0, 00:11:01.294 "rw_mbytes_per_sec": 0, 00:11:01.294 "r_mbytes_per_sec": 0, 00:11:01.294 "w_mbytes_per_sec": 0 00:11:01.294 }, 00:11:01.294 "claimed": true, 00:11:01.294 "claim_type": "exclusive_write", 00:11:01.294 "zoned": false, 00:11:01.294 "supported_io_types": { 00:11:01.294 "read": true, 00:11:01.294 "write": true, 00:11:01.294 "unmap": true, 00:11:01.294 "flush": true, 00:11:01.294 "reset": true, 00:11:01.294 "nvme_admin": false, 00:11:01.294 "nvme_io": false, 00:11:01.294 "nvme_io_md": false, 00:11:01.294 "write_zeroes": true, 00:11:01.294 "zcopy": true, 00:11:01.294 "get_zone_info": false, 00:11:01.294 "zone_management": false, 00:11:01.294 "zone_append": false, 00:11:01.294 "compare": false, 00:11:01.294 "compare_and_write": false, 00:11:01.294 "abort": true, 00:11:01.294 "seek_hole": false, 00:11:01.294 "seek_data": false, 00:11:01.294 "copy": true, 00:11:01.294 "nvme_iov_md": false 00:11:01.294 }, 00:11:01.294 "memory_domains": [ 00:11:01.294 { 00:11:01.294 "dma_device_id": "system", 00:11:01.294 "dma_device_type": 1 00:11:01.294 }, 00:11:01.294 { 00:11:01.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.294 "dma_device_type": 2 00:11:01.294 } 00:11:01.294 ], 00:11:01.294 "driver_specific": {} 00:11:01.294 } 00:11:01.294 ] 00:11:01.294 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.294 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:01.294 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:01.294 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:01.294 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:01.294 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:01.294 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.294 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.294 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.294 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.294 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.294 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.294 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.294 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.294 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.294 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.294 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.294 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.294 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.294 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.294 "name": "Existed_Raid", 00:11:01.294 "uuid": "735d41c6-7a79-492a-b825-5d2838d9ba01", 00:11:01.294 "strip_size_kb": 64, 00:11:01.294 "state": "configuring", 00:11:01.294 "raid_level": "concat", 00:11:01.294 "superblock": true, 00:11:01.294 "num_base_bdevs": 4, 00:11:01.294 "num_base_bdevs_discovered": 2, 00:11:01.294 
"num_base_bdevs_operational": 4, 00:11:01.294 "base_bdevs_list": [ 00:11:01.294 { 00:11:01.294 "name": "BaseBdev1", 00:11:01.294 "uuid": "04e70a6b-3165-46b3-8adc-a5726eb70745", 00:11:01.294 "is_configured": true, 00:11:01.294 "data_offset": 2048, 00:11:01.294 "data_size": 63488 00:11:01.294 }, 00:11:01.294 { 00:11:01.294 "name": "BaseBdev2", 00:11:01.294 "uuid": "0979d08a-6055-46c3-bb21-1d549b31a6f8", 00:11:01.294 "is_configured": true, 00:11:01.294 "data_offset": 2048, 00:11:01.294 "data_size": 63488 00:11:01.294 }, 00:11:01.294 { 00:11:01.294 "name": "BaseBdev3", 00:11:01.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.294 "is_configured": false, 00:11:01.294 "data_offset": 0, 00:11:01.294 "data_size": 0 00:11:01.294 }, 00:11:01.294 { 00:11:01.294 "name": "BaseBdev4", 00:11:01.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.294 "is_configured": false, 00:11:01.294 "data_offset": 0, 00:11:01.294 "data_size": 0 00:11:01.294 } 00:11:01.294 ] 00:11:01.294 }' 00:11:01.294 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.294 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.553 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:01.553 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.553 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.812 [2024-11-20 15:18:48.062322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:01.812 BaseBdev3 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.812 [ 00:11:01.812 { 00:11:01.812 "name": "BaseBdev3", 00:11:01.812 "aliases": [ 00:11:01.812 "80415608-cfde-46e4-a36e-777a9f867f29" 00:11:01.812 ], 00:11:01.812 "product_name": "Malloc disk", 00:11:01.812 "block_size": 512, 00:11:01.812 "num_blocks": 65536, 00:11:01.812 "uuid": "80415608-cfde-46e4-a36e-777a9f867f29", 00:11:01.812 "assigned_rate_limits": { 00:11:01.812 "rw_ios_per_sec": 0, 00:11:01.812 "rw_mbytes_per_sec": 0, 00:11:01.812 "r_mbytes_per_sec": 0, 00:11:01.812 "w_mbytes_per_sec": 0 00:11:01.812 }, 00:11:01.812 "claimed": true, 00:11:01.812 "claim_type": "exclusive_write", 00:11:01.812 "zoned": false, 00:11:01.812 "supported_io_types": { 
00:11:01.812 "read": true, 00:11:01.812 "write": true, 00:11:01.812 "unmap": true, 00:11:01.812 "flush": true, 00:11:01.812 "reset": true, 00:11:01.812 "nvme_admin": false, 00:11:01.812 "nvme_io": false, 00:11:01.812 "nvme_io_md": false, 00:11:01.812 "write_zeroes": true, 00:11:01.812 "zcopy": true, 00:11:01.812 "get_zone_info": false, 00:11:01.812 "zone_management": false, 00:11:01.812 "zone_append": false, 00:11:01.812 "compare": false, 00:11:01.812 "compare_and_write": false, 00:11:01.812 "abort": true, 00:11:01.812 "seek_hole": false, 00:11:01.812 "seek_data": false, 00:11:01.812 "copy": true, 00:11:01.812 "nvme_iov_md": false 00:11:01.812 }, 00:11:01.812 "memory_domains": [ 00:11:01.812 { 00:11:01.812 "dma_device_id": "system", 00:11:01.812 "dma_device_type": 1 00:11:01.812 }, 00:11:01.812 { 00:11:01.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.812 "dma_device_type": 2 00:11:01.812 } 00:11:01.812 ], 00:11:01.812 "driver_specific": {} 00:11:01.812 } 00:11:01.812 ] 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.812 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.812 "name": "Existed_Raid", 00:11:01.812 "uuid": "735d41c6-7a79-492a-b825-5d2838d9ba01", 00:11:01.812 "strip_size_kb": 64, 00:11:01.812 "state": "configuring", 00:11:01.812 "raid_level": "concat", 00:11:01.812 "superblock": true, 00:11:01.812 "num_base_bdevs": 4, 00:11:01.812 "num_base_bdevs_discovered": 3, 00:11:01.812 "num_base_bdevs_operational": 4, 00:11:01.812 "base_bdevs_list": [ 00:11:01.812 { 00:11:01.812 "name": "BaseBdev1", 00:11:01.812 "uuid": "04e70a6b-3165-46b3-8adc-a5726eb70745", 00:11:01.812 "is_configured": true, 00:11:01.812 "data_offset": 2048, 00:11:01.812 "data_size": 63488 00:11:01.812 }, 00:11:01.812 { 00:11:01.812 "name": "BaseBdev2", 00:11:01.812 
"uuid": "0979d08a-6055-46c3-bb21-1d549b31a6f8", 00:11:01.812 "is_configured": true, 00:11:01.813 "data_offset": 2048, 00:11:01.813 "data_size": 63488 00:11:01.813 }, 00:11:01.813 { 00:11:01.813 "name": "BaseBdev3", 00:11:01.813 "uuid": "80415608-cfde-46e4-a36e-777a9f867f29", 00:11:01.813 "is_configured": true, 00:11:01.813 "data_offset": 2048, 00:11:01.813 "data_size": 63488 00:11:01.813 }, 00:11:01.813 { 00:11:01.813 "name": "BaseBdev4", 00:11:01.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.813 "is_configured": false, 00:11:01.813 "data_offset": 0, 00:11:01.813 "data_size": 0 00:11:01.813 } 00:11:01.813 ] 00:11:01.813 }' 00:11:01.813 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.813 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.071 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:02.071 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.071 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.071 [2024-11-20 15:18:48.540173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:02.071 [2024-11-20 15:18:48.540455] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:02.071 [2024-11-20 15:18:48.540471] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:02.071 [2024-11-20 15:18:48.540784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:02.071 [2024-11-20 15:18:48.540939] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:02.071 [2024-11-20 15:18:48.540959] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:11:02.071 [2024-11-20 15:18:48.541109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:02.071 BaseBdev4 00:11:02.071 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.071 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:02.071 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:02.071 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:02.071 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:02.071 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:02.071 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:02.071 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:02.071 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.071 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.330 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.330 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:02.330 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.330 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.330 [ 00:11:02.330 { 00:11:02.330 "name": "BaseBdev4", 00:11:02.330 "aliases": [ 00:11:02.330 "245a0706-c32d-4d5e-9422-16ae8477d9ce" 00:11:02.330 ], 00:11:02.330 "product_name": "Malloc disk", 00:11:02.330 "block_size": 512, 
00:11:02.330 "num_blocks": 65536, 00:11:02.330 "uuid": "245a0706-c32d-4d5e-9422-16ae8477d9ce", 00:11:02.330 "assigned_rate_limits": { 00:11:02.330 "rw_ios_per_sec": 0, 00:11:02.330 "rw_mbytes_per_sec": 0, 00:11:02.330 "r_mbytes_per_sec": 0, 00:11:02.330 "w_mbytes_per_sec": 0 00:11:02.330 }, 00:11:02.330 "claimed": true, 00:11:02.330 "claim_type": "exclusive_write", 00:11:02.330 "zoned": false, 00:11:02.330 "supported_io_types": { 00:11:02.330 "read": true, 00:11:02.330 "write": true, 00:11:02.330 "unmap": true, 00:11:02.330 "flush": true, 00:11:02.330 "reset": true, 00:11:02.330 "nvme_admin": false, 00:11:02.330 "nvme_io": false, 00:11:02.330 "nvme_io_md": false, 00:11:02.330 "write_zeroes": true, 00:11:02.330 "zcopy": true, 00:11:02.330 "get_zone_info": false, 00:11:02.330 "zone_management": false, 00:11:02.330 "zone_append": false, 00:11:02.330 "compare": false, 00:11:02.330 "compare_and_write": false, 00:11:02.330 "abort": true, 00:11:02.330 "seek_hole": false, 00:11:02.330 "seek_data": false, 00:11:02.330 "copy": true, 00:11:02.330 "nvme_iov_md": false 00:11:02.330 }, 00:11:02.330 "memory_domains": [ 00:11:02.330 { 00:11:02.330 "dma_device_id": "system", 00:11:02.330 "dma_device_type": 1 00:11:02.330 }, 00:11:02.330 { 00:11:02.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.330 "dma_device_type": 2 00:11:02.330 } 00:11:02.330 ], 00:11:02.330 "driver_specific": {} 00:11:02.330 } 00:11:02.330 ] 00:11:02.330 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.330 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:02.330 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:02.330 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:02.330 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:11:02.330 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.330 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:02.330 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:02.330 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.330 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.330 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.330 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.330 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.330 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.330 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.330 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.330 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.330 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.330 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.330 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.330 "name": "Existed_Raid", 00:11:02.330 "uuid": "735d41c6-7a79-492a-b825-5d2838d9ba01", 00:11:02.330 "strip_size_kb": 64, 00:11:02.330 "state": "online", 00:11:02.330 "raid_level": "concat", 00:11:02.330 "superblock": true, 00:11:02.330 "num_base_bdevs": 
4, 00:11:02.330 "num_base_bdevs_discovered": 4, 00:11:02.330 "num_base_bdevs_operational": 4, 00:11:02.330 "base_bdevs_list": [ 00:11:02.330 { 00:11:02.330 "name": "BaseBdev1", 00:11:02.330 "uuid": "04e70a6b-3165-46b3-8adc-a5726eb70745", 00:11:02.330 "is_configured": true, 00:11:02.330 "data_offset": 2048, 00:11:02.330 "data_size": 63488 00:11:02.330 }, 00:11:02.330 { 00:11:02.330 "name": "BaseBdev2", 00:11:02.330 "uuid": "0979d08a-6055-46c3-bb21-1d549b31a6f8", 00:11:02.330 "is_configured": true, 00:11:02.330 "data_offset": 2048, 00:11:02.330 "data_size": 63488 00:11:02.330 }, 00:11:02.330 { 00:11:02.330 "name": "BaseBdev3", 00:11:02.330 "uuid": "80415608-cfde-46e4-a36e-777a9f867f29", 00:11:02.330 "is_configured": true, 00:11:02.330 "data_offset": 2048, 00:11:02.330 "data_size": 63488 00:11:02.330 }, 00:11:02.330 { 00:11:02.330 "name": "BaseBdev4", 00:11:02.330 "uuid": "245a0706-c32d-4d5e-9422-16ae8477d9ce", 00:11:02.330 "is_configured": true, 00:11:02.330 "data_offset": 2048, 00:11:02.330 "data_size": 63488 00:11:02.330 } 00:11:02.330 ] 00:11:02.330 }' 00:11:02.330 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.330 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.589 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:02.589 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:02.589 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:02.589 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:02.589 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:02.589 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:02.589 
15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:02.589 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.589 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.589 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:02.589 [2024-11-20 15:18:49.003994] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:02.589 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.589 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:02.589 "name": "Existed_Raid", 00:11:02.589 "aliases": [ 00:11:02.589 "735d41c6-7a79-492a-b825-5d2838d9ba01" 00:11:02.589 ], 00:11:02.589 "product_name": "Raid Volume", 00:11:02.589 "block_size": 512, 00:11:02.589 "num_blocks": 253952, 00:11:02.589 "uuid": "735d41c6-7a79-492a-b825-5d2838d9ba01", 00:11:02.589 "assigned_rate_limits": { 00:11:02.589 "rw_ios_per_sec": 0, 00:11:02.589 "rw_mbytes_per_sec": 0, 00:11:02.589 "r_mbytes_per_sec": 0, 00:11:02.589 "w_mbytes_per_sec": 0 00:11:02.589 }, 00:11:02.589 "claimed": false, 00:11:02.589 "zoned": false, 00:11:02.589 "supported_io_types": { 00:11:02.589 "read": true, 00:11:02.589 "write": true, 00:11:02.589 "unmap": true, 00:11:02.589 "flush": true, 00:11:02.589 "reset": true, 00:11:02.589 "nvme_admin": false, 00:11:02.589 "nvme_io": false, 00:11:02.589 "nvme_io_md": false, 00:11:02.589 "write_zeroes": true, 00:11:02.589 "zcopy": false, 00:11:02.589 "get_zone_info": false, 00:11:02.590 "zone_management": false, 00:11:02.590 "zone_append": false, 00:11:02.590 "compare": false, 00:11:02.590 "compare_and_write": false, 00:11:02.590 "abort": false, 00:11:02.590 "seek_hole": false, 00:11:02.590 "seek_data": false, 00:11:02.590 "copy": false, 00:11:02.590 
"nvme_iov_md": false 00:11:02.590 }, 00:11:02.590 "memory_domains": [ 00:11:02.590 { 00:11:02.590 "dma_device_id": "system", 00:11:02.590 "dma_device_type": 1 00:11:02.590 }, 00:11:02.590 { 00:11:02.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.590 "dma_device_type": 2 00:11:02.590 }, 00:11:02.590 { 00:11:02.590 "dma_device_id": "system", 00:11:02.590 "dma_device_type": 1 00:11:02.590 }, 00:11:02.590 { 00:11:02.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.590 "dma_device_type": 2 00:11:02.590 }, 00:11:02.590 { 00:11:02.590 "dma_device_id": "system", 00:11:02.590 "dma_device_type": 1 00:11:02.590 }, 00:11:02.590 { 00:11:02.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.590 "dma_device_type": 2 00:11:02.590 }, 00:11:02.590 { 00:11:02.590 "dma_device_id": "system", 00:11:02.590 "dma_device_type": 1 00:11:02.590 }, 00:11:02.590 { 00:11:02.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.590 "dma_device_type": 2 00:11:02.590 } 00:11:02.590 ], 00:11:02.590 "driver_specific": { 00:11:02.590 "raid": { 00:11:02.590 "uuid": "735d41c6-7a79-492a-b825-5d2838d9ba01", 00:11:02.590 "strip_size_kb": 64, 00:11:02.590 "state": "online", 00:11:02.590 "raid_level": "concat", 00:11:02.590 "superblock": true, 00:11:02.590 "num_base_bdevs": 4, 00:11:02.590 "num_base_bdevs_discovered": 4, 00:11:02.590 "num_base_bdevs_operational": 4, 00:11:02.590 "base_bdevs_list": [ 00:11:02.590 { 00:11:02.590 "name": "BaseBdev1", 00:11:02.590 "uuid": "04e70a6b-3165-46b3-8adc-a5726eb70745", 00:11:02.590 "is_configured": true, 00:11:02.590 "data_offset": 2048, 00:11:02.590 "data_size": 63488 00:11:02.590 }, 00:11:02.590 { 00:11:02.590 "name": "BaseBdev2", 00:11:02.590 "uuid": "0979d08a-6055-46c3-bb21-1d549b31a6f8", 00:11:02.590 "is_configured": true, 00:11:02.590 "data_offset": 2048, 00:11:02.590 "data_size": 63488 00:11:02.590 }, 00:11:02.590 { 00:11:02.590 "name": "BaseBdev3", 00:11:02.590 "uuid": "80415608-cfde-46e4-a36e-777a9f867f29", 00:11:02.590 "is_configured": true, 
00:11:02.590 "data_offset": 2048, 00:11:02.590 "data_size": 63488 00:11:02.590 }, 00:11:02.590 { 00:11:02.590 "name": "BaseBdev4", 00:11:02.590 "uuid": "245a0706-c32d-4d5e-9422-16ae8477d9ce", 00:11:02.590 "is_configured": true, 00:11:02.590 "data_offset": 2048, 00:11:02.590 "data_size": 63488 00:11:02.590 } 00:11:02.590 ] 00:11:02.590 } 00:11:02.590 } 00:11:02.590 }' 00:11:02.590 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:02.850 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:02.850 BaseBdev2 00:11:02.850 BaseBdev3 00:11:02.850 BaseBdev4' 00:11:02.850 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.850 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:02.850 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.850 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:02.850 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.850 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.850 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.850 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.850 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.850 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.850 15:18:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.850 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:02.850 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.850 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.850 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.850 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.850 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.850 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.850 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.850 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:02.850 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.850 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.850 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.850 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.850 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.850 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.850 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:02.850 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:02.850 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.850 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.850 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.851 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.851 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.851 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.851 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:02.851 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.851 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.851 [2024-11-20 15:18:49.319268] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:02.851 [2024-11-20 15:18:49.319309] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:02.851 [2024-11-20 15:18:49.319366] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:03.177 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.177 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:03.177 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:03.177 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:03.177 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:03.177 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:03.177 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:03.177 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.177 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:03.177 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.177 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.177 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:03.177 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.177 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.177 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.177 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.177 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.177 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.177 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.177 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.177 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:03.177 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.177 "name": "Existed_Raid", 00:11:03.177 "uuid": "735d41c6-7a79-492a-b825-5d2838d9ba01", 00:11:03.177 "strip_size_kb": 64, 00:11:03.177 "state": "offline", 00:11:03.177 "raid_level": "concat", 00:11:03.177 "superblock": true, 00:11:03.177 "num_base_bdevs": 4, 00:11:03.177 "num_base_bdevs_discovered": 3, 00:11:03.177 "num_base_bdevs_operational": 3, 00:11:03.177 "base_bdevs_list": [ 00:11:03.177 { 00:11:03.177 "name": null, 00:11:03.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.177 "is_configured": false, 00:11:03.177 "data_offset": 0, 00:11:03.177 "data_size": 63488 00:11:03.177 }, 00:11:03.177 { 00:11:03.177 "name": "BaseBdev2", 00:11:03.177 "uuid": "0979d08a-6055-46c3-bb21-1d549b31a6f8", 00:11:03.177 "is_configured": true, 00:11:03.177 "data_offset": 2048, 00:11:03.177 "data_size": 63488 00:11:03.177 }, 00:11:03.177 { 00:11:03.177 "name": "BaseBdev3", 00:11:03.177 "uuid": "80415608-cfde-46e4-a36e-777a9f867f29", 00:11:03.177 "is_configured": true, 00:11:03.177 "data_offset": 2048, 00:11:03.177 "data_size": 63488 00:11:03.177 }, 00:11:03.177 { 00:11:03.177 "name": "BaseBdev4", 00:11:03.177 "uuid": "245a0706-c32d-4d5e-9422-16ae8477d9ce", 00:11:03.177 "is_configured": true, 00:11:03.177 "data_offset": 2048, 00:11:03.177 "data_size": 63488 00:11:03.177 } 00:11:03.177 ] 00:11:03.177 }' 00:11:03.177 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.177 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.445 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:03.445 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:03.445 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.445 
15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.445 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.445 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:03.445 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.445 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:03.445 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:03.445 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:03.445 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.445 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.445 [2024-11-20 15:18:49.888085] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:03.704 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.704 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:03.704 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:03.704 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:03.704 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.704 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.704 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.704 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:03.704 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:03.704 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:03.704 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:03.704 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.704 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.704 [2024-11-20 15:18:50.035283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:03.704 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.704 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:03.704 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:03.704 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.704 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:03.704 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.704 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.704 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.704 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:03.704 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:03.704 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:03.704 15:18:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.704 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.964 [2024-11-20 15:18:50.186874] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:03.964 [2024-11-20 15:18:50.186932] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.964 BaseBdev2 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.964 [ 00:11:03.964 { 00:11:03.964 "name": "BaseBdev2", 00:11:03.964 "aliases": [ 00:11:03.964 
"5d422fcd-58c7-424b-9e12-4c7ed30ae46f" 00:11:03.964 ], 00:11:03.964 "product_name": "Malloc disk", 00:11:03.964 "block_size": 512, 00:11:03.964 "num_blocks": 65536, 00:11:03.964 "uuid": "5d422fcd-58c7-424b-9e12-4c7ed30ae46f", 00:11:03.964 "assigned_rate_limits": { 00:11:03.964 "rw_ios_per_sec": 0, 00:11:03.964 "rw_mbytes_per_sec": 0, 00:11:03.964 "r_mbytes_per_sec": 0, 00:11:03.964 "w_mbytes_per_sec": 0 00:11:03.964 }, 00:11:03.964 "claimed": false, 00:11:03.964 "zoned": false, 00:11:03.964 "supported_io_types": { 00:11:03.964 "read": true, 00:11:03.964 "write": true, 00:11:03.964 "unmap": true, 00:11:03.964 "flush": true, 00:11:03.964 "reset": true, 00:11:03.964 "nvme_admin": false, 00:11:03.964 "nvme_io": false, 00:11:03.964 "nvme_io_md": false, 00:11:03.964 "write_zeroes": true, 00:11:03.964 "zcopy": true, 00:11:03.964 "get_zone_info": false, 00:11:03.964 "zone_management": false, 00:11:03.964 "zone_append": false, 00:11:03.964 "compare": false, 00:11:03.964 "compare_and_write": false, 00:11:03.964 "abort": true, 00:11:03.964 "seek_hole": false, 00:11:03.964 "seek_data": false, 00:11:03.964 "copy": true, 00:11:03.964 "nvme_iov_md": false 00:11:03.964 }, 00:11:03.964 "memory_domains": [ 00:11:03.964 { 00:11:03.964 "dma_device_id": "system", 00:11:03.964 "dma_device_type": 1 00:11:03.964 }, 00:11:03.964 { 00:11:03.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.964 "dma_device_type": 2 00:11:03.964 } 00:11:03.964 ], 00:11:03.964 "driver_specific": {} 00:11:03.964 } 00:11:03.964 ] 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:03.964 15:18:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.964 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.224 BaseBdev3 00:11:04.224 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.224 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:04.224 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:04.224 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:04.224 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:04.224 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:04.224 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:04.224 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:04.224 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.224 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.225 [ 00:11:04.225 { 
00:11:04.225 "name": "BaseBdev3", 00:11:04.225 "aliases": [ 00:11:04.225 "7b3c85cc-bee1-40e9-bc49-54b407f8a700" 00:11:04.225 ], 00:11:04.225 "product_name": "Malloc disk", 00:11:04.225 "block_size": 512, 00:11:04.225 "num_blocks": 65536, 00:11:04.225 "uuid": "7b3c85cc-bee1-40e9-bc49-54b407f8a700", 00:11:04.225 "assigned_rate_limits": { 00:11:04.225 "rw_ios_per_sec": 0, 00:11:04.225 "rw_mbytes_per_sec": 0, 00:11:04.225 "r_mbytes_per_sec": 0, 00:11:04.225 "w_mbytes_per_sec": 0 00:11:04.225 }, 00:11:04.225 "claimed": false, 00:11:04.225 "zoned": false, 00:11:04.225 "supported_io_types": { 00:11:04.225 "read": true, 00:11:04.225 "write": true, 00:11:04.225 "unmap": true, 00:11:04.225 "flush": true, 00:11:04.225 "reset": true, 00:11:04.225 "nvme_admin": false, 00:11:04.225 "nvme_io": false, 00:11:04.225 "nvme_io_md": false, 00:11:04.225 "write_zeroes": true, 00:11:04.225 "zcopy": true, 00:11:04.225 "get_zone_info": false, 00:11:04.225 "zone_management": false, 00:11:04.225 "zone_append": false, 00:11:04.225 "compare": false, 00:11:04.225 "compare_and_write": false, 00:11:04.225 "abort": true, 00:11:04.225 "seek_hole": false, 00:11:04.225 "seek_data": false, 00:11:04.225 "copy": true, 00:11:04.225 "nvme_iov_md": false 00:11:04.225 }, 00:11:04.225 "memory_domains": [ 00:11:04.225 { 00:11:04.225 "dma_device_id": "system", 00:11:04.225 "dma_device_type": 1 00:11:04.225 }, 00:11:04.225 { 00:11:04.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.225 "dma_device_type": 2 00:11:04.225 } 00:11:04.225 ], 00:11:04.225 "driver_specific": {} 00:11:04.225 } 00:11:04.225 ] 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.225 BaseBdev4 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:04.225 [ 00:11:04.225 { 00:11:04.225 "name": "BaseBdev4", 00:11:04.225 "aliases": [ 00:11:04.225 "8ac1187d-0092-4517-a6a0-d8e6172e3f68" 00:11:04.225 ], 00:11:04.225 "product_name": "Malloc disk", 00:11:04.225 "block_size": 512, 00:11:04.225 "num_blocks": 65536, 00:11:04.225 "uuid": "8ac1187d-0092-4517-a6a0-d8e6172e3f68", 00:11:04.225 "assigned_rate_limits": { 00:11:04.225 "rw_ios_per_sec": 0, 00:11:04.225 "rw_mbytes_per_sec": 0, 00:11:04.225 "r_mbytes_per_sec": 0, 00:11:04.225 "w_mbytes_per_sec": 0 00:11:04.225 }, 00:11:04.225 "claimed": false, 00:11:04.225 "zoned": false, 00:11:04.225 "supported_io_types": { 00:11:04.225 "read": true, 00:11:04.225 "write": true, 00:11:04.225 "unmap": true, 00:11:04.225 "flush": true, 00:11:04.225 "reset": true, 00:11:04.225 "nvme_admin": false, 00:11:04.225 "nvme_io": false, 00:11:04.225 "nvme_io_md": false, 00:11:04.225 "write_zeroes": true, 00:11:04.225 "zcopy": true, 00:11:04.225 "get_zone_info": false, 00:11:04.225 "zone_management": false, 00:11:04.225 "zone_append": false, 00:11:04.225 "compare": false, 00:11:04.225 "compare_and_write": false, 00:11:04.225 "abort": true, 00:11:04.225 "seek_hole": false, 00:11:04.225 "seek_data": false, 00:11:04.225 "copy": true, 00:11:04.225 "nvme_iov_md": false 00:11:04.225 }, 00:11:04.225 "memory_domains": [ 00:11:04.225 { 00:11:04.225 "dma_device_id": "system", 00:11:04.225 "dma_device_type": 1 00:11:04.225 }, 00:11:04.225 { 00:11:04.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.225 "dma_device_type": 2 00:11:04.225 } 00:11:04.225 ], 00:11:04.225 "driver_specific": {} 00:11:04.225 } 00:11:04.225 ] 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:04.225 15:18:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.225 [2024-11-20 15:18:50.595973] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:04.225 [2024-11-20 15:18:50.596025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:04.225 [2024-11-20 15:18:50.596056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:04.225 [2024-11-20 15:18:50.598223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:04.225 [2024-11-20 15:18:50.598282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.225 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.225 "name": "Existed_Raid", 00:11:04.225 "uuid": "cd5fb17b-e539-4029-bd18-276feed4a061", 00:11:04.225 "strip_size_kb": 64, 00:11:04.225 "state": "configuring", 00:11:04.225 "raid_level": "concat", 00:11:04.225 "superblock": true, 00:11:04.225 "num_base_bdevs": 4, 00:11:04.225 "num_base_bdevs_discovered": 3, 00:11:04.225 "num_base_bdevs_operational": 4, 00:11:04.225 "base_bdevs_list": [ 00:11:04.225 { 00:11:04.225 "name": "BaseBdev1", 00:11:04.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.225 "is_configured": false, 00:11:04.225 "data_offset": 0, 00:11:04.225 "data_size": 0 00:11:04.225 }, 00:11:04.225 { 00:11:04.225 "name": "BaseBdev2", 00:11:04.225 "uuid": "5d422fcd-58c7-424b-9e12-4c7ed30ae46f", 00:11:04.225 "is_configured": true, 00:11:04.225 "data_offset": 2048, 00:11:04.225 "data_size": 63488 
00:11:04.225 }, 00:11:04.225 { 00:11:04.225 "name": "BaseBdev3", 00:11:04.225 "uuid": "7b3c85cc-bee1-40e9-bc49-54b407f8a700", 00:11:04.225 "is_configured": true, 00:11:04.225 "data_offset": 2048, 00:11:04.226 "data_size": 63488 00:11:04.226 }, 00:11:04.226 { 00:11:04.226 "name": "BaseBdev4", 00:11:04.226 "uuid": "8ac1187d-0092-4517-a6a0-d8e6172e3f68", 00:11:04.226 "is_configured": true, 00:11:04.226 "data_offset": 2048, 00:11:04.226 "data_size": 63488 00:11:04.226 } 00:11:04.226 ] 00:11:04.226 }' 00:11:04.226 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.226 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.795 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:04.795 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.795 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.795 [2024-11-20 15:18:51.015397] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:04.795 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.795 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:04.795 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.795 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.795 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.795 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.795 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:04.795 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.795 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.795 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.795 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.795 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.795 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.795 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.795 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.795 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.795 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.795 "name": "Existed_Raid", 00:11:04.795 "uuid": "cd5fb17b-e539-4029-bd18-276feed4a061", 00:11:04.795 "strip_size_kb": 64, 00:11:04.795 "state": "configuring", 00:11:04.795 "raid_level": "concat", 00:11:04.795 "superblock": true, 00:11:04.795 "num_base_bdevs": 4, 00:11:04.795 "num_base_bdevs_discovered": 2, 00:11:04.795 "num_base_bdevs_operational": 4, 00:11:04.795 "base_bdevs_list": [ 00:11:04.795 { 00:11:04.795 "name": "BaseBdev1", 00:11:04.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.795 "is_configured": false, 00:11:04.795 "data_offset": 0, 00:11:04.795 "data_size": 0 00:11:04.795 }, 00:11:04.795 { 00:11:04.795 "name": null, 00:11:04.795 "uuid": "5d422fcd-58c7-424b-9e12-4c7ed30ae46f", 00:11:04.795 "is_configured": false, 00:11:04.795 "data_offset": 0, 00:11:04.795 "data_size": 63488 
00:11:04.795 }, 00:11:04.795 { 00:11:04.795 "name": "BaseBdev3", 00:11:04.795 "uuid": "7b3c85cc-bee1-40e9-bc49-54b407f8a700", 00:11:04.795 "is_configured": true, 00:11:04.795 "data_offset": 2048, 00:11:04.795 "data_size": 63488 00:11:04.795 }, 00:11:04.795 { 00:11:04.795 "name": "BaseBdev4", 00:11:04.795 "uuid": "8ac1187d-0092-4517-a6a0-d8e6172e3f68", 00:11:04.795 "is_configured": true, 00:11:04.795 "data_offset": 2048, 00:11:04.795 "data_size": 63488 00:11:04.795 } 00:11:04.795 ] 00:11:04.795 }' 00:11:04.795 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.795 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.054 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.054 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.054 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.054 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:05.054 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.054 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:05.054 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:05.054 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.054 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.313 [2024-11-20 15:18:51.545311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:05.313 BaseBdev1 00:11:05.313 15:18:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.313 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:05.313 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:05.313 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:05.313 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:05.313 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:05.313 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:05.313 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:05.313 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.313 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.313 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.313 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:05.313 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.313 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.313 [ 00:11:05.313 { 00:11:05.313 "name": "BaseBdev1", 00:11:05.313 "aliases": [ 00:11:05.313 "300a2922-6c1b-4293-a60c-398c577a9685" 00:11:05.313 ], 00:11:05.313 "product_name": "Malloc disk", 00:11:05.313 "block_size": 512, 00:11:05.313 "num_blocks": 65536, 00:11:05.313 "uuid": "300a2922-6c1b-4293-a60c-398c577a9685", 00:11:05.313 "assigned_rate_limits": { 00:11:05.313 "rw_ios_per_sec": 0, 00:11:05.313 "rw_mbytes_per_sec": 0, 
00:11:05.313 "r_mbytes_per_sec": 0, 00:11:05.313 "w_mbytes_per_sec": 0 00:11:05.313 }, 00:11:05.313 "claimed": true, 00:11:05.313 "claim_type": "exclusive_write", 00:11:05.313 "zoned": false, 00:11:05.313 "supported_io_types": { 00:11:05.313 "read": true, 00:11:05.313 "write": true, 00:11:05.313 "unmap": true, 00:11:05.313 "flush": true, 00:11:05.313 "reset": true, 00:11:05.313 "nvme_admin": false, 00:11:05.313 "nvme_io": false, 00:11:05.313 "nvme_io_md": false, 00:11:05.313 "write_zeroes": true, 00:11:05.313 "zcopy": true, 00:11:05.313 "get_zone_info": false, 00:11:05.313 "zone_management": false, 00:11:05.313 "zone_append": false, 00:11:05.313 "compare": false, 00:11:05.313 "compare_and_write": false, 00:11:05.313 "abort": true, 00:11:05.313 "seek_hole": false, 00:11:05.313 "seek_data": false, 00:11:05.313 "copy": true, 00:11:05.313 "nvme_iov_md": false 00:11:05.313 }, 00:11:05.313 "memory_domains": [ 00:11:05.313 { 00:11:05.313 "dma_device_id": "system", 00:11:05.313 "dma_device_type": 1 00:11:05.313 }, 00:11:05.313 { 00:11:05.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.313 "dma_device_type": 2 00:11:05.313 } 00:11:05.313 ], 00:11:05.313 "driver_specific": {} 00:11:05.313 } 00:11:05.313 ] 00:11:05.313 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.313 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:05.313 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:05.313 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.313 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.313 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.313 15:18:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.313 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.313 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.313 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.313 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.313 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.313 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.313 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.313 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.313 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.313 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.313 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.313 "name": "Existed_Raid", 00:11:05.313 "uuid": "cd5fb17b-e539-4029-bd18-276feed4a061", 00:11:05.313 "strip_size_kb": 64, 00:11:05.313 "state": "configuring", 00:11:05.313 "raid_level": "concat", 00:11:05.313 "superblock": true, 00:11:05.313 "num_base_bdevs": 4, 00:11:05.313 "num_base_bdevs_discovered": 3, 00:11:05.313 "num_base_bdevs_operational": 4, 00:11:05.313 "base_bdevs_list": [ 00:11:05.313 { 00:11:05.313 "name": "BaseBdev1", 00:11:05.313 "uuid": "300a2922-6c1b-4293-a60c-398c577a9685", 00:11:05.313 "is_configured": true, 00:11:05.313 "data_offset": 2048, 00:11:05.313 "data_size": 63488 00:11:05.313 }, 00:11:05.313 { 
00:11:05.313 "name": null, 00:11:05.313 "uuid": "5d422fcd-58c7-424b-9e12-4c7ed30ae46f", 00:11:05.313 "is_configured": false, 00:11:05.313 "data_offset": 0, 00:11:05.313 "data_size": 63488 00:11:05.313 }, 00:11:05.313 { 00:11:05.313 "name": "BaseBdev3", 00:11:05.313 "uuid": "7b3c85cc-bee1-40e9-bc49-54b407f8a700", 00:11:05.313 "is_configured": true, 00:11:05.313 "data_offset": 2048, 00:11:05.313 "data_size": 63488 00:11:05.313 }, 00:11:05.313 { 00:11:05.313 "name": "BaseBdev4", 00:11:05.313 "uuid": "8ac1187d-0092-4517-a6a0-d8e6172e3f68", 00:11:05.313 "is_configured": true, 00:11:05.313 "data_offset": 2048, 00:11:05.313 "data_size": 63488 00:11:05.313 } 00:11:05.313 ] 00:11:05.313 }' 00:11:05.313 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.314 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.572 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:05.572 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.572 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.572 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.572 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.572 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:05.572 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:05.572 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.572 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.572 [2024-11-20 15:18:52.052747] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:05.831 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.831 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:05.831 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.831 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.831 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.831 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.831 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.831 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.831 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.831 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.831 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.831 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.831 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.831 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.831 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.831 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.831 15:18:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.831 "name": "Existed_Raid", 00:11:05.831 "uuid": "cd5fb17b-e539-4029-bd18-276feed4a061", 00:11:05.831 "strip_size_kb": 64, 00:11:05.831 "state": "configuring", 00:11:05.831 "raid_level": "concat", 00:11:05.831 "superblock": true, 00:11:05.832 "num_base_bdevs": 4, 00:11:05.832 "num_base_bdevs_discovered": 2, 00:11:05.832 "num_base_bdevs_operational": 4, 00:11:05.832 "base_bdevs_list": [ 00:11:05.832 { 00:11:05.832 "name": "BaseBdev1", 00:11:05.832 "uuid": "300a2922-6c1b-4293-a60c-398c577a9685", 00:11:05.832 "is_configured": true, 00:11:05.832 "data_offset": 2048, 00:11:05.832 "data_size": 63488 00:11:05.832 }, 00:11:05.832 { 00:11:05.832 "name": null, 00:11:05.832 "uuid": "5d422fcd-58c7-424b-9e12-4c7ed30ae46f", 00:11:05.832 "is_configured": false, 00:11:05.832 "data_offset": 0, 00:11:05.832 "data_size": 63488 00:11:05.832 }, 00:11:05.832 { 00:11:05.832 "name": null, 00:11:05.832 "uuid": "7b3c85cc-bee1-40e9-bc49-54b407f8a700", 00:11:05.832 "is_configured": false, 00:11:05.832 "data_offset": 0, 00:11:05.832 "data_size": 63488 00:11:05.832 }, 00:11:05.832 { 00:11:05.832 "name": "BaseBdev4", 00:11:05.832 "uuid": "8ac1187d-0092-4517-a6a0-d8e6172e3f68", 00:11:05.832 "is_configured": true, 00:11:05.832 "data_offset": 2048, 00:11:05.832 "data_size": 63488 00:11:05.832 } 00:11:05.832 ] 00:11:05.832 }' 00:11:05.832 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.832 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.091 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.091 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.091 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.091 15:18:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:06.091 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.091 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:06.091 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:06.091 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.091 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.091 [2024-11-20 15:18:52.559977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:06.091 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.091 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:06.091 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.091 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.091 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:06.091 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.091 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.091 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.091 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.091 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:06.091 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.091 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.091 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.091 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.351 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.351 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.351 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.351 "name": "Existed_Raid", 00:11:06.351 "uuid": "cd5fb17b-e539-4029-bd18-276feed4a061", 00:11:06.351 "strip_size_kb": 64, 00:11:06.351 "state": "configuring", 00:11:06.351 "raid_level": "concat", 00:11:06.351 "superblock": true, 00:11:06.351 "num_base_bdevs": 4, 00:11:06.351 "num_base_bdevs_discovered": 3, 00:11:06.351 "num_base_bdevs_operational": 4, 00:11:06.351 "base_bdevs_list": [ 00:11:06.351 { 00:11:06.351 "name": "BaseBdev1", 00:11:06.351 "uuid": "300a2922-6c1b-4293-a60c-398c577a9685", 00:11:06.351 "is_configured": true, 00:11:06.351 "data_offset": 2048, 00:11:06.351 "data_size": 63488 00:11:06.351 }, 00:11:06.351 { 00:11:06.351 "name": null, 00:11:06.351 "uuid": "5d422fcd-58c7-424b-9e12-4c7ed30ae46f", 00:11:06.351 "is_configured": false, 00:11:06.351 "data_offset": 0, 00:11:06.351 "data_size": 63488 00:11:06.351 }, 00:11:06.351 { 00:11:06.351 "name": "BaseBdev3", 00:11:06.351 "uuid": "7b3c85cc-bee1-40e9-bc49-54b407f8a700", 00:11:06.351 "is_configured": true, 00:11:06.351 "data_offset": 2048, 00:11:06.351 "data_size": 63488 00:11:06.351 }, 00:11:06.351 { 00:11:06.351 "name": "BaseBdev4", 00:11:06.352 "uuid": 
"8ac1187d-0092-4517-a6a0-d8e6172e3f68", 00:11:06.352 "is_configured": true, 00:11:06.352 "data_offset": 2048, 00:11:06.352 "data_size": 63488 00:11:06.352 } 00:11:06.352 ] 00:11:06.352 }' 00:11:06.352 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.352 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.610 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.610 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:06.610 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.610 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.610 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.610 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:06.610 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:06.610 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.610 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.610 [2024-11-20 15:18:53.055419] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:06.870 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.870 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:06.870 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.870 15:18:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.870 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:06.870 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.870 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.870 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.870 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.870 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.870 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.870 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.870 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.870 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.870 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.870 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.870 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.870 "name": "Existed_Raid", 00:11:06.870 "uuid": "cd5fb17b-e539-4029-bd18-276feed4a061", 00:11:06.870 "strip_size_kb": 64, 00:11:06.870 "state": "configuring", 00:11:06.870 "raid_level": "concat", 00:11:06.870 "superblock": true, 00:11:06.870 "num_base_bdevs": 4, 00:11:06.870 "num_base_bdevs_discovered": 2, 00:11:06.870 "num_base_bdevs_operational": 4, 00:11:06.870 "base_bdevs_list": [ 00:11:06.870 { 00:11:06.870 "name": null, 00:11:06.870 
"uuid": "300a2922-6c1b-4293-a60c-398c577a9685", 00:11:06.870 "is_configured": false, 00:11:06.870 "data_offset": 0, 00:11:06.870 "data_size": 63488 00:11:06.870 }, 00:11:06.870 { 00:11:06.870 "name": null, 00:11:06.870 "uuid": "5d422fcd-58c7-424b-9e12-4c7ed30ae46f", 00:11:06.870 "is_configured": false, 00:11:06.870 "data_offset": 0, 00:11:06.870 "data_size": 63488 00:11:06.870 }, 00:11:06.870 { 00:11:06.870 "name": "BaseBdev3", 00:11:06.870 "uuid": "7b3c85cc-bee1-40e9-bc49-54b407f8a700", 00:11:06.870 "is_configured": true, 00:11:06.870 "data_offset": 2048, 00:11:06.870 "data_size": 63488 00:11:06.870 }, 00:11:06.870 { 00:11:06.870 "name": "BaseBdev4", 00:11:06.870 "uuid": "8ac1187d-0092-4517-a6a0-d8e6172e3f68", 00:11:06.870 "is_configured": true, 00:11:06.870 "data_offset": 2048, 00:11:06.870 "data_size": 63488 00:11:06.870 } 00:11:06.870 ] 00:11:06.870 }' 00:11:06.870 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.870 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.129 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.129 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.129 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.129 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:07.129 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.129 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:07.129 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:07.129 15:18:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.129 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.129 [2024-11-20 15:18:53.595923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:07.129 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.129 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:07.129 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.129 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.129 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:07.129 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.129 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.129 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.129 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.129 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.129 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.129 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.129 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.129 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.129 15:18:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.413 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.413 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.413 "name": "Existed_Raid", 00:11:07.413 "uuid": "cd5fb17b-e539-4029-bd18-276feed4a061", 00:11:07.413 "strip_size_kb": 64, 00:11:07.413 "state": "configuring", 00:11:07.413 "raid_level": "concat", 00:11:07.414 "superblock": true, 00:11:07.414 "num_base_bdevs": 4, 00:11:07.414 "num_base_bdevs_discovered": 3, 00:11:07.414 "num_base_bdevs_operational": 4, 00:11:07.414 "base_bdevs_list": [ 00:11:07.414 { 00:11:07.414 "name": null, 00:11:07.414 "uuid": "300a2922-6c1b-4293-a60c-398c577a9685", 00:11:07.414 "is_configured": false, 00:11:07.414 "data_offset": 0, 00:11:07.414 "data_size": 63488 00:11:07.414 }, 00:11:07.414 { 00:11:07.414 "name": "BaseBdev2", 00:11:07.414 "uuid": "5d422fcd-58c7-424b-9e12-4c7ed30ae46f", 00:11:07.414 "is_configured": true, 00:11:07.414 "data_offset": 2048, 00:11:07.414 "data_size": 63488 00:11:07.414 }, 00:11:07.414 { 00:11:07.414 "name": "BaseBdev3", 00:11:07.414 "uuid": "7b3c85cc-bee1-40e9-bc49-54b407f8a700", 00:11:07.414 "is_configured": true, 00:11:07.414 "data_offset": 2048, 00:11:07.414 "data_size": 63488 00:11:07.414 }, 00:11:07.414 { 00:11:07.414 "name": "BaseBdev4", 00:11:07.414 "uuid": "8ac1187d-0092-4517-a6a0-d8e6172e3f68", 00:11:07.414 "is_configured": true, 00:11:07.414 "data_offset": 2048, 00:11:07.414 "data_size": 63488 00:11:07.414 } 00:11:07.414 ] 00:11:07.414 }' 00:11:07.414 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.414 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.755 15:18:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 300a2922-6c1b-4293-a60c-398c577a9685 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.755 [2024-11-20 15:18:54.186422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:07.755 [2024-11-20 15:18:54.186710] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:07.755 [2024-11-20 15:18:54.186726] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:07.755 [2024-11-20 15:18:54.187030] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:07.755 [2024-11-20 15:18:54.187165] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:07.755 [2024-11-20 15:18:54.187179] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:07.755 [2024-11-20 15:18:54.187311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.755 NewBaseBdev 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.755 15:18:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.755 [ 00:11:07.755 { 00:11:07.755 "name": "NewBaseBdev", 00:11:07.755 "aliases": [ 00:11:07.755 "300a2922-6c1b-4293-a60c-398c577a9685" 00:11:07.755 ], 00:11:07.755 "product_name": "Malloc disk", 00:11:07.755 "block_size": 512, 00:11:07.755 "num_blocks": 65536, 00:11:07.755 "uuid": "300a2922-6c1b-4293-a60c-398c577a9685", 00:11:07.755 "assigned_rate_limits": { 00:11:07.755 "rw_ios_per_sec": 0, 00:11:07.755 "rw_mbytes_per_sec": 0, 00:11:07.755 "r_mbytes_per_sec": 0, 00:11:07.755 "w_mbytes_per_sec": 0 00:11:07.755 }, 00:11:07.755 "claimed": true, 00:11:07.755 "claim_type": "exclusive_write", 00:11:07.755 "zoned": false, 00:11:07.755 "supported_io_types": { 00:11:07.755 "read": true, 00:11:07.755 "write": true, 00:11:07.755 "unmap": true, 00:11:07.755 "flush": true, 00:11:07.755 "reset": true, 00:11:07.755 "nvme_admin": false, 00:11:07.755 "nvme_io": false, 00:11:07.755 "nvme_io_md": false, 00:11:07.755 "write_zeroes": true, 00:11:07.755 "zcopy": true, 00:11:07.755 "get_zone_info": false, 00:11:07.755 "zone_management": false, 00:11:07.755 "zone_append": false, 00:11:07.755 "compare": false, 00:11:07.755 "compare_and_write": false, 00:11:07.755 "abort": true, 00:11:07.755 "seek_hole": false, 00:11:07.755 "seek_data": false, 00:11:07.755 "copy": true, 00:11:07.755 "nvme_iov_md": false 00:11:07.755 }, 00:11:07.755 "memory_domains": [ 00:11:07.755 { 00:11:07.755 "dma_device_id": "system", 00:11:07.755 "dma_device_type": 1 00:11:07.755 }, 00:11:07.755 { 00:11:07.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.755 "dma_device_type": 2 00:11:07.755 } 00:11:07.755 ], 00:11:07.755 "driver_specific": {} 00:11:07.755 } 00:11:07.755 ] 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:07.755 15:18:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.755 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.014 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.014 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.014 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.014 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.014 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.014 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.014 "name": "Existed_Raid", 00:11:08.014 "uuid": "cd5fb17b-e539-4029-bd18-276feed4a061", 00:11:08.014 "strip_size_kb": 64, 00:11:08.014 
"state": "online", 00:11:08.014 "raid_level": "concat", 00:11:08.014 "superblock": true, 00:11:08.014 "num_base_bdevs": 4, 00:11:08.014 "num_base_bdevs_discovered": 4, 00:11:08.014 "num_base_bdevs_operational": 4, 00:11:08.014 "base_bdevs_list": [ 00:11:08.014 { 00:11:08.014 "name": "NewBaseBdev", 00:11:08.014 "uuid": "300a2922-6c1b-4293-a60c-398c577a9685", 00:11:08.014 "is_configured": true, 00:11:08.014 "data_offset": 2048, 00:11:08.014 "data_size": 63488 00:11:08.014 }, 00:11:08.014 { 00:11:08.014 "name": "BaseBdev2", 00:11:08.014 "uuid": "5d422fcd-58c7-424b-9e12-4c7ed30ae46f", 00:11:08.014 "is_configured": true, 00:11:08.014 "data_offset": 2048, 00:11:08.014 "data_size": 63488 00:11:08.014 }, 00:11:08.014 { 00:11:08.014 "name": "BaseBdev3", 00:11:08.014 "uuid": "7b3c85cc-bee1-40e9-bc49-54b407f8a700", 00:11:08.014 "is_configured": true, 00:11:08.014 "data_offset": 2048, 00:11:08.014 "data_size": 63488 00:11:08.014 }, 00:11:08.014 { 00:11:08.014 "name": "BaseBdev4", 00:11:08.014 "uuid": "8ac1187d-0092-4517-a6a0-d8e6172e3f68", 00:11:08.014 "is_configured": true, 00:11:08.014 "data_offset": 2048, 00:11:08.014 "data_size": 63488 00:11:08.014 } 00:11:08.014 ] 00:11:08.014 }' 00:11:08.014 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.014 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.274 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:08.274 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:08.274 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:08.274 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:08.274 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:08.274 
15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:08.274 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:08.274 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:08.274 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.274 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.274 [2024-11-20 15:18:54.702155] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:08.274 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.274 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:08.274 "name": "Existed_Raid", 00:11:08.274 "aliases": [ 00:11:08.274 "cd5fb17b-e539-4029-bd18-276feed4a061" 00:11:08.274 ], 00:11:08.274 "product_name": "Raid Volume", 00:11:08.274 "block_size": 512, 00:11:08.274 "num_blocks": 253952, 00:11:08.274 "uuid": "cd5fb17b-e539-4029-bd18-276feed4a061", 00:11:08.274 "assigned_rate_limits": { 00:11:08.274 "rw_ios_per_sec": 0, 00:11:08.274 "rw_mbytes_per_sec": 0, 00:11:08.274 "r_mbytes_per_sec": 0, 00:11:08.274 "w_mbytes_per_sec": 0 00:11:08.274 }, 00:11:08.274 "claimed": false, 00:11:08.274 "zoned": false, 00:11:08.274 "supported_io_types": { 00:11:08.275 "read": true, 00:11:08.275 "write": true, 00:11:08.275 "unmap": true, 00:11:08.275 "flush": true, 00:11:08.275 "reset": true, 00:11:08.275 "nvme_admin": false, 00:11:08.275 "nvme_io": false, 00:11:08.275 "nvme_io_md": false, 00:11:08.275 "write_zeroes": true, 00:11:08.275 "zcopy": false, 00:11:08.275 "get_zone_info": false, 00:11:08.275 "zone_management": false, 00:11:08.275 "zone_append": false, 00:11:08.275 "compare": false, 00:11:08.275 "compare_and_write": false, 00:11:08.275 "abort": 
false, 00:11:08.275 "seek_hole": false, 00:11:08.275 "seek_data": false, 00:11:08.275 "copy": false, 00:11:08.275 "nvme_iov_md": false 00:11:08.275 }, 00:11:08.275 "memory_domains": [ 00:11:08.275 { 00:11:08.275 "dma_device_id": "system", 00:11:08.275 "dma_device_type": 1 00:11:08.275 }, 00:11:08.275 { 00:11:08.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.275 "dma_device_type": 2 00:11:08.275 }, 00:11:08.275 { 00:11:08.275 "dma_device_id": "system", 00:11:08.275 "dma_device_type": 1 00:11:08.275 }, 00:11:08.275 { 00:11:08.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.275 "dma_device_type": 2 00:11:08.275 }, 00:11:08.275 { 00:11:08.275 "dma_device_id": "system", 00:11:08.275 "dma_device_type": 1 00:11:08.275 }, 00:11:08.275 { 00:11:08.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.275 "dma_device_type": 2 00:11:08.275 }, 00:11:08.275 { 00:11:08.275 "dma_device_id": "system", 00:11:08.275 "dma_device_type": 1 00:11:08.275 }, 00:11:08.275 { 00:11:08.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.275 "dma_device_type": 2 00:11:08.275 } 00:11:08.275 ], 00:11:08.275 "driver_specific": { 00:11:08.275 "raid": { 00:11:08.275 "uuid": "cd5fb17b-e539-4029-bd18-276feed4a061", 00:11:08.275 "strip_size_kb": 64, 00:11:08.275 "state": "online", 00:11:08.275 "raid_level": "concat", 00:11:08.275 "superblock": true, 00:11:08.275 "num_base_bdevs": 4, 00:11:08.275 "num_base_bdevs_discovered": 4, 00:11:08.275 "num_base_bdevs_operational": 4, 00:11:08.275 "base_bdevs_list": [ 00:11:08.275 { 00:11:08.275 "name": "NewBaseBdev", 00:11:08.275 "uuid": "300a2922-6c1b-4293-a60c-398c577a9685", 00:11:08.275 "is_configured": true, 00:11:08.275 "data_offset": 2048, 00:11:08.275 "data_size": 63488 00:11:08.275 }, 00:11:08.275 { 00:11:08.275 "name": "BaseBdev2", 00:11:08.275 "uuid": "5d422fcd-58c7-424b-9e12-4c7ed30ae46f", 00:11:08.275 "is_configured": true, 00:11:08.275 "data_offset": 2048, 00:11:08.275 "data_size": 63488 00:11:08.275 }, 00:11:08.275 { 00:11:08.275 
"name": "BaseBdev3", 00:11:08.275 "uuid": "7b3c85cc-bee1-40e9-bc49-54b407f8a700", 00:11:08.275 "is_configured": true, 00:11:08.275 "data_offset": 2048, 00:11:08.275 "data_size": 63488 00:11:08.275 }, 00:11:08.275 { 00:11:08.275 "name": "BaseBdev4", 00:11:08.275 "uuid": "8ac1187d-0092-4517-a6a0-d8e6172e3f68", 00:11:08.275 "is_configured": true, 00:11:08.275 "data_offset": 2048, 00:11:08.275 "data_size": 63488 00:11:08.275 } 00:11:08.275 ] 00:11:08.275 } 00:11:08.275 } 00:11:08.275 }' 00:11:08.275 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:08.537 BaseBdev2 00:11:08.537 BaseBdev3 00:11:08.537 BaseBdev4' 00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.537 15:18:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:08.537 [2024-11-20 15:18:54.953453] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:08.537 [2024-11-20 15:18:54.953492] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:08.537 [2024-11-20 15:18:54.953576] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:08.537 [2024-11-20 15:18:54.953648] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:08.537 [2024-11-20 15:18:54.953690] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71780
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71780 ']'
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71780
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:08.537 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71780
killing process with pid 71780
00:11:08.537 15:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:08.537 15:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:08.537 15:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71780'
00:11:08.537 15:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71780
00:11:08.537 [2024-11-20 15:18:55.002479] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:11:08.537 15:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71780
00:11:09.107 [2024-11-20 15:18:55.419872] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:11:10.485 15:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:11:10.485
00:11:10.485 real 0m11.463s
00:11:10.485 user 0m18.173s
00:11:10.485 sys 0m2.286s
00:11:10.485 15:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:10.485 15:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:10.485 ************************************
00:11:10.485 END TEST raid_state_function_test_sb
00:11:10.485 ************************************
00:11:10.485 15:18:56 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4
00:11:10.485 15:18:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:11:10.485 15:18:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:10.485 15:18:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:11:10.485 ************************************
00:11:10.485 START TEST raid_superblock_test
00:11:10.485 ************************************
00:11:10.485 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4
00:11:10.485 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat
00:11:10.485 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4
00:11:10.485 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:11:10.485 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:11:10.485 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:11:10.485 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:11:10.485 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:11:10.485 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:11:10.485 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:11:10.485 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:11:10.485 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:11:10.485 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:11:10.485 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:11:10.485 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']'
00:11:10.485 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:11:10.485 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:11:10.485 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72451
00:11:10.485 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:11:10.485 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72451
00:11:10.485 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72451 ']'
00:11:10.485 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:10.485 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:10.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:10.485 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:10.485 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:10.485 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:10.485 [2024-11-20 15:18:56.802053] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization...
00:11:10.486 [2024-11-20 15:18:56.802237] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72451 ]
00:11:10.744 [2024-11-20 15:18:57.000608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:10.744 [2024-11-20 15:18:57.126477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:11.003 [2024-11-20 15:18:57.337882] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:11.003 [2024-11-20 15:18:57.337932] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:11.261 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:11.261 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:11:11.261 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:11:11.261 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:11.261 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:11:11.261 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:11:11.261 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:11:11.261 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:11.261 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:11.261 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:11.261 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:11:11.261 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:11.261 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:11.261 malloc1
00:11:11.261 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:11.261 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:11:11.261 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:11.261 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:11.261 [2024-11-20 15:18:57.722014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:11:11.261 [2024-11-20 15:18:57.722085] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:11.261 [2024-11-20 15:18:57.722112] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:11:11.261 [2024-11-20 15:18:57.722125] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:11.261 [2024-11-20 15:18:57.724776] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:11.261 [2024-11-20 15:18:57.724818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:11:11.261 pt1
00:11:11.261 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:11.261 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:11.261 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:11.261 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:11:11.261 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:11:11.261 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:11:11.261 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:11.261 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:11.261 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:11.261 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:11:11.261 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:11.261 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:11.521 malloc2
00:11:11.521 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:11.521 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:11.521 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:11.521 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:11.521 [2024-11-20 15:18:57.779518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:11.521 [2024-11-20 15:18:57.779593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:11.521 [2024-11-20 15:18:57.779627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:11:11.521 [2024-11-20 15:18:57.779640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:11.521 [2024-11-20 15:18:57.782113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:11.521 [2024-11-20 15:18:57.782154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:11.521 pt2
00:11:11.521 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:11.521 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:11.521 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:11.521 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:11:11.521 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:11:11.521 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:11:11.521 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:11.521 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:11.521 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:11.521 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:11:11.521 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:11.521 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:11.521 malloc3
00:11:11.521 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:11.521 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:11:11.521 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:11.521 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:11.521 [2024-11-20 15:18:57.849603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:11:11.521 [2024-11-20 15:18:57.849692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:11.521 [2024-11-20 15:18:57.849720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:11:11.521 [2024-11-20 15:18:57.849733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:11.521 [2024-11-20 15:18:57.852269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:11.521 [2024-11-20 15:18:57.852312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:11:11.521 pt3
00:11:11.521 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:11.521 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:11.521 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:11.521 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4
00:11:11.521 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4
00:11:11.521 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:11:11.521 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:11.521 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:11.521 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:11.521 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4
00:11:11.521 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:11.521 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:11.521 malloc4
00:11:11.522 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:11.522 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:11:11.522 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:11.522 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:11.522 [2024-11-20 15:18:57.906883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:11:11.522 [2024-11-20 15:18:57.906955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:11.522 [2024-11-20 15:18:57.906982] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:11:11.522 [2024-11-20 15:18:57.906994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:11.522 [2024-11-20 15:18:57.909442] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:11.522 [2024-11-20 15:18:57.909485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:11:11.522 pt4
00:11:11.522 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:11.522 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:11.522 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:11.522 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s
00:11:11.522 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:11.522 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:11.522 [2024-11-20 15:18:57.918930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:11:11.522 [2024-11-20 15:18:57.921076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:11.522 [2024-11-20 15:18:57.921174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:11:11.522 [2024-11-20 15:18:57.921219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:11:11.522 [2024-11-20 15:18:57.921405] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:11:11.522 [2024-11-20 15:18:57.921424] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:11:11.522 [2024-11-20 15:18:57.921770] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:11:11.522 [2024-11-20 15:18:57.921947] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:11:11.522 [2024-11-20 15:18:57.921968] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:11:11.522 [2024-11-20 15:18:57.922143] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:11.522 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:11.522 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:11:11.522 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:11.522 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:11.522 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:11.522 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:11.522 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:11.522 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:11.522 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:11.522 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:11.522 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:11.522 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:11.522 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:11.522 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:11.522 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:11.522 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:11.522 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:11.522 "name": "raid_bdev1",
00:11:11.522 "uuid": "6c06f080-0380-4f6a-834e-6fe34584eea9",
00:11:11.522 "strip_size_kb": 64,
00:11:11.522 "state": "online",
00:11:11.522 "raid_level": "concat",
00:11:11.522 "superblock": true,
00:11:11.522 "num_base_bdevs": 4,
00:11:11.522 "num_base_bdevs_discovered": 4,
00:11:11.522 "num_base_bdevs_operational": 4,
00:11:11.522 "base_bdevs_list": [
00:11:11.522 {
00:11:11.522 "name": "pt1",
00:11:11.522 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:11.522 "is_configured": true,
00:11:11.522 "data_offset": 2048,
00:11:11.522 "data_size": 63488
00:11:11.522 },
00:11:11.522 {
00:11:11.522 "name": "pt2",
00:11:11.522 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:11.522 "is_configured": true,
00:11:11.522 "data_offset": 2048,
00:11:11.522 "data_size": 63488
00:11:11.522 },
00:11:11.522 {
00:11:11.522 "name": "pt3",
00:11:11.522 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:11.522 "is_configured": true,
00:11:11.522 "data_offset": 2048,
00:11:11.522 "data_size": 63488
00:11:11.522 },
00:11:11.522 {
00:11:11.522 "name": "pt4",
00:11:11.522 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:11.522 "is_configured": true,
00:11:11.522 "data_offset": 2048,
00:11:11.522 "data_size": 63488
00:11:11.522 }
00:11:11.522 ]
00:11:11.522 }'
00:11:11.522 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:11.522 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:12.092 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:11:12.092 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:11:12.092 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:12.092 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:12.092 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:11:12.092 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:12.092 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:12.092 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:12.092 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:12.092 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:12.092 [2024-11-20 15:18:58.390547] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:12.092 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:12.092 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:12.092 "name": "raid_bdev1",
00:11:12.092 "aliases": [
00:11:12.092 "6c06f080-0380-4f6a-834e-6fe34584eea9"
00:11:12.092 ],
00:11:12.092 "product_name": "Raid Volume",
00:11:12.092 "block_size": 512,
00:11:12.092 "num_blocks": 253952,
00:11:12.092 "uuid": "6c06f080-0380-4f6a-834e-6fe34584eea9",
00:11:12.092 "assigned_rate_limits": {
00:11:12.092 "rw_ios_per_sec": 0,
00:11:12.092 "rw_mbytes_per_sec": 0,
00:11:12.092 "r_mbytes_per_sec": 0,
00:11:12.092 "w_mbytes_per_sec": 0
00:11:12.092 },
00:11:12.092 "claimed": false,
00:11:12.092 "zoned": false,
00:11:12.092 "supported_io_types": {
00:11:12.092 "read": true,
00:11:12.092 "write": true,
00:11:12.092 "unmap": true,
00:11:12.092 "flush": true,
00:11:12.092 "reset": true,
00:11:12.092 "nvme_admin": false,
00:11:12.092 "nvme_io": false,
00:11:12.092 "nvme_io_md": false,
00:11:12.092 "write_zeroes": true,
00:11:12.092 "zcopy": false,
00:11:12.092 "get_zone_info": false,
00:11:12.092 "zone_management": false,
00:11:12.092 "zone_append": false,
00:11:12.092 "compare": false,
00:11:12.092 "compare_and_write": false,
00:11:12.092 "abort": false,
00:11:12.092 "seek_hole": false,
00:11:12.092 "seek_data": false,
00:11:12.092 "copy": false,
00:11:12.092 "nvme_iov_md": false
00:11:12.092 },
00:11:12.092 "memory_domains": [
00:11:12.092 {
00:11:12.092 "dma_device_id": "system",
00:11:12.092 "dma_device_type": 1
00:11:12.092 },
00:11:12.092 {
00:11:12.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:12.092 "dma_device_type": 2
00:11:12.092 },
00:11:12.092 {
00:11:12.092 "dma_device_id": "system",
00:11:12.092 "dma_device_type": 1
00:11:12.092 },
00:11:12.092 {
00:11:12.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:12.092 "dma_device_type": 2
00:11:12.092 },
00:11:12.092 {
00:11:12.092 "dma_device_id": "system",
00:11:12.092 "dma_device_type": 1
00:11:12.092 },
00:11:12.092 {
00:11:12.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:12.092 "dma_device_type": 2
00:11:12.092 },
00:11:12.092 {
00:11:12.092 "dma_device_id": "system",
00:11:12.092 "dma_device_type": 1
00:11:12.092 },
00:11:12.092 {
00:11:12.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:12.092 "dma_device_type": 2
00:11:12.092 }
00:11:12.092 ],
00:11:12.092 "driver_specific": {
00:11:12.092 "raid": {
00:11:12.092 "uuid": "6c06f080-0380-4f6a-834e-6fe34584eea9",
00:11:12.092 "strip_size_kb": 64,
00:11:12.092 "state": "online",
00:11:12.092 "raid_level": "concat",
00:11:12.092 "superblock": true,
00:11:12.092 "num_base_bdevs": 4,
00:11:12.092 "num_base_bdevs_discovered": 4,
00:11:12.092 "num_base_bdevs_operational": 4,
00:11:12.092 "base_bdevs_list": [
00:11:12.092 {
00:11:12.092 "name": "pt1",
00:11:12.092 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:12.092 "is_configured": true,
00:11:12.092 "data_offset": 2048,
00:11:12.092 "data_size": 63488
00:11:12.092 },
00:11:12.092 {
00:11:12.092 "name": "pt2",
00:11:12.092 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:12.092 "is_configured": true,
00:11:12.092 "data_offset": 2048,
00:11:12.092 "data_size": 63488
00:11:12.092 },
00:11:12.092 {
00:11:12.092 "name": "pt3",
00:11:12.092 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:12.092 "is_configured": true,
00:11:12.092 "data_offset": 2048,
00:11:12.092 "data_size": 63488
00:11:12.092 },
00:11:12.092 {
00:11:12.092 "name": "pt4",
00:11:12.092 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:12.092 "is_configured": true,
00:11:12.092 "data_offset": 2048,
00:11:12.092 "data_size": 63488
00:11:12.092 }
00:11:12.092 ]
00:11:12.092 }
00:11:12.092 }
00:11:12.092 }'
00:11:12.092 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:12.092 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:11:12.092 pt2
00:11:12.092 pt3
00:11:12.092 pt4'
00:11:12.092 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:12.092 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:11:12.092 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:12.092 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:11:12.092 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:12.092 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:12.092 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:12.092 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:12.352 [2024-11-20 15:18:58.710088] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6c06f080-0380-4f6a-834e-6fe34584eea9 00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6c06f080-0380-4f6a-834e-6fe34584eea9 ']' 00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.352 [2024-11-20 15:18:58.753721] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:12.352 [2024-11-20 15:18:58.753758] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:12.352 [2024-11-20 15:18:58.753851] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:12.352 [2024-11-20 15:18:58.753941] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:12.352 [2024-11-20 15:18:58.753963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.352 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.353 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
jq -r '.[]' 00:11:12.353 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.353 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.353 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:12.353 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:12.353 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:12.353 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:12.353 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.353 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.353 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.353 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:12.353 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:12.353 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.353 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.353 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.353 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:12.353 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:12.353 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.353 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.353 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:11:12.353 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:12.353 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:12.353 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.353 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.612 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.612 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:12.612 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.612 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.612 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:12.612 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.612 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:12.612 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:12.612 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:12.612 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:12.612 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:12.612 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:12.612 15:18:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:12.612 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:12.612 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:12.612 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.612 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.612 [2024-11-20 15:18:58.893547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:12.612 [2024-11-20 15:18:58.895814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:12.612 [2024-11-20 15:18:58.895867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:12.612 [2024-11-20 15:18:58.895904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:12.612 [2024-11-20 15:18:58.895971] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:12.612 [2024-11-20 15:18:58.896033] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:12.612 [2024-11-20 15:18:58.896056] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:12.612 [2024-11-20 15:18:58.896078] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:12.612 [2024-11-20 15:18:58.896094] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:12.612 [2024-11-20 15:18:58.896107] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:11:12.612 request: 00:11:12.612 { 00:11:12.612 "name": "raid_bdev1", 00:11:12.612 "raid_level": "concat", 00:11:12.612 "base_bdevs": [ 00:11:12.612 "malloc1", 00:11:12.612 "malloc2", 00:11:12.612 "malloc3", 00:11:12.612 "malloc4" 00:11:12.612 ], 00:11:12.612 "strip_size_kb": 64, 00:11:12.612 "superblock": false, 00:11:12.612 "method": "bdev_raid_create", 00:11:12.612 "req_id": 1 00:11:12.612 } 00:11:12.612 Got JSON-RPC error response 00:11:12.612 response: 00:11:12.612 { 00:11:12.612 "code": -17, 00:11:12.612 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:12.612 } 00:11:12.612 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:12.612 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:12.612 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:12.612 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:12.612 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:12.612 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.612 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:12.612 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.612 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.612 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.612 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:12.612 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:12.612 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:11:12.612 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.612 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.612 [2024-11-20 15:18:58.953446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:12.613 [2024-11-20 15:18:58.953712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.613 [2024-11-20 15:18:58.953783] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:12.613 [2024-11-20 15:18:58.953888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.613 [2024-11-20 15:18:58.956756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.613 [2024-11-20 15:18:58.956952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:12.613 [2024-11-20 15:18:58.957145] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:12.613 [2024-11-20 15:18:58.957299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:12.613 pt1 00:11:12.613 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.613 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:12.613 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:12.613 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.613 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.613 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.613 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:12.613 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.613 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.613 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.613 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.613 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.613 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.613 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.613 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.613 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.613 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.613 "name": "raid_bdev1", 00:11:12.613 "uuid": "6c06f080-0380-4f6a-834e-6fe34584eea9", 00:11:12.613 "strip_size_kb": 64, 00:11:12.613 "state": "configuring", 00:11:12.613 "raid_level": "concat", 00:11:12.613 "superblock": true, 00:11:12.613 "num_base_bdevs": 4, 00:11:12.613 "num_base_bdevs_discovered": 1, 00:11:12.613 "num_base_bdevs_operational": 4, 00:11:12.613 "base_bdevs_list": [ 00:11:12.613 { 00:11:12.613 "name": "pt1", 00:11:12.613 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:12.613 "is_configured": true, 00:11:12.613 "data_offset": 2048, 00:11:12.613 "data_size": 63488 00:11:12.613 }, 00:11:12.613 { 00:11:12.613 "name": null, 00:11:12.613 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:12.613 "is_configured": false, 00:11:12.613 "data_offset": 2048, 00:11:12.613 "data_size": 63488 00:11:12.613 }, 00:11:12.613 { 00:11:12.613 "name": null, 00:11:12.613 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:12.613 "is_configured": false, 00:11:12.613 "data_offset": 2048, 00:11:12.613 "data_size": 63488 00:11:12.613 }, 00:11:12.613 { 00:11:12.613 "name": null, 00:11:12.613 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:12.613 "is_configured": false, 00:11:12.613 "data_offset": 2048, 00:11:12.613 "data_size": 63488 00:11:12.613 } 00:11:12.613 ] 00:11:12.613 }' 00:11:12.613 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.613 15:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.298 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:13.298 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:13.298 15:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.298 15:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.298 [2024-11-20 15:18:59.448943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:13.298 [2024-11-20 15:18:59.449187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.298 [2024-11-20 15:18:59.449220] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:13.298 [2024-11-20 15:18:59.449237] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.298 [2024-11-20 15:18:59.449749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.298 [2024-11-20 15:18:59.449775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:13.298 [2024-11-20 15:18:59.449865] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:13.298 [2024-11-20 15:18:59.449894] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:13.298 pt2 00:11:13.298 15:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.298 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:13.298 15:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.298 15:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.298 [2024-11-20 15:18:59.457023] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:13.298 15:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.298 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:13.298 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:13.298 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.298 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:13.298 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.298 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.298 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.298 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.298 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.298 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.298 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.298 15:18:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.298 15:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.298 15:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.298 15:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.298 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.298 "name": "raid_bdev1", 00:11:13.298 "uuid": "6c06f080-0380-4f6a-834e-6fe34584eea9", 00:11:13.298 "strip_size_kb": 64, 00:11:13.298 "state": "configuring", 00:11:13.298 "raid_level": "concat", 00:11:13.298 "superblock": true, 00:11:13.298 "num_base_bdevs": 4, 00:11:13.298 "num_base_bdevs_discovered": 1, 00:11:13.298 "num_base_bdevs_operational": 4, 00:11:13.298 "base_bdevs_list": [ 00:11:13.298 { 00:11:13.298 "name": "pt1", 00:11:13.298 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:13.298 "is_configured": true, 00:11:13.298 "data_offset": 2048, 00:11:13.298 "data_size": 63488 00:11:13.298 }, 00:11:13.298 { 00:11:13.298 "name": null, 00:11:13.298 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:13.298 "is_configured": false, 00:11:13.298 "data_offset": 0, 00:11:13.298 "data_size": 63488 00:11:13.298 }, 00:11:13.298 { 00:11:13.298 "name": null, 00:11:13.298 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:13.298 "is_configured": false, 00:11:13.298 "data_offset": 2048, 00:11:13.298 "data_size": 63488 00:11:13.298 }, 00:11:13.298 { 00:11:13.298 "name": null, 00:11:13.298 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:13.299 "is_configured": false, 00:11:13.299 "data_offset": 2048, 00:11:13.299 "data_size": 63488 00:11:13.299 } 00:11:13.299 ] 00:11:13.299 }' 00:11:13.299 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.299 15:18:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:13.558 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:13.558 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:13.558 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:13.558 15:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.558 15:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.558 [2024-11-20 15:18:59.888338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:13.558 [2024-11-20 15:18:59.888421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.558 [2024-11-20 15:18:59.888447] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:13.559 [2024-11-20 15:18:59.888461] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.559 [2024-11-20 15:18:59.888963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.559 [2024-11-20 15:18:59.888984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:13.559 [2024-11-20 15:18:59.889077] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:13.559 [2024-11-20 15:18:59.889101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:13.559 pt2 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.559 [2024-11-20 15:18:59.896305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:13.559 [2024-11-20 15:18:59.896371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.559 [2024-11-20 15:18:59.896396] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:13.559 [2024-11-20 15:18:59.896408] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.559 [2024-11-20 15:18:59.896899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.559 [2024-11-20 15:18:59.896919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:13.559 [2024-11-20 15:18:59.897004] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:13.559 [2024-11-20 15:18:59.897034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:13.559 pt3 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.559 [2024-11-20 15:18:59.904267] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:13.559 [2024-11-20 15:18:59.904324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.559 [2024-11-20 15:18:59.904361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:13.559 [2024-11-20 15:18:59.904379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.559 [2024-11-20 15:18:59.904862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.559 [2024-11-20 15:18:59.904881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:13.559 [2024-11-20 15:18:59.904958] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:13.559 [2024-11-20 15:18:59.904984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:13.559 [2024-11-20 15:18:59.905127] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:13.559 [2024-11-20 15:18:59.905137] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:13.559 [2024-11-20 15:18:59.905380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:13.559 [2024-11-20 15:18:59.905528] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:13.559 [2024-11-20 15:18:59.905542] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:13.559 [2024-11-20 15:18:59.905686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.559 pt4 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.559 "name": "raid_bdev1", 00:11:13.559 "uuid": "6c06f080-0380-4f6a-834e-6fe34584eea9", 00:11:13.559 "strip_size_kb": 64, 00:11:13.559 "state": "online", 00:11:13.559 "raid_level": "concat", 00:11:13.559 
"superblock": true, 00:11:13.559 "num_base_bdevs": 4, 00:11:13.559 "num_base_bdevs_discovered": 4, 00:11:13.559 "num_base_bdevs_operational": 4, 00:11:13.559 "base_bdevs_list": [ 00:11:13.559 { 00:11:13.559 "name": "pt1", 00:11:13.559 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:13.559 "is_configured": true, 00:11:13.559 "data_offset": 2048, 00:11:13.559 "data_size": 63488 00:11:13.559 }, 00:11:13.559 { 00:11:13.559 "name": "pt2", 00:11:13.559 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:13.559 "is_configured": true, 00:11:13.559 "data_offset": 2048, 00:11:13.559 "data_size": 63488 00:11:13.559 }, 00:11:13.559 { 00:11:13.559 "name": "pt3", 00:11:13.559 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:13.559 "is_configured": true, 00:11:13.559 "data_offset": 2048, 00:11:13.559 "data_size": 63488 00:11:13.559 }, 00:11:13.559 { 00:11:13.559 "name": "pt4", 00:11:13.559 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:13.559 "is_configured": true, 00:11:13.559 "data_offset": 2048, 00:11:13.559 "data_size": 63488 00:11:13.559 } 00:11:13.559 ] 00:11:13.559 }' 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.559 15:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.128 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:14.128 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:14.128 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:14.128 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:14.128 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:14.128 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:14.128 15:19:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:14.128 15:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.128 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:14.128 15:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.128 [2024-11-20 15:19:00.324078] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:14.128 15:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.128 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:14.128 "name": "raid_bdev1", 00:11:14.128 "aliases": [ 00:11:14.128 "6c06f080-0380-4f6a-834e-6fe34584eea9" 00:11:14.128 ], 00:11:14.128 "product_name": "Raid Volume", 00:11:14.128 "block_size": 512, 00:11:14.128 "num_blocks": 253952, 00:11:14.128 "uuid": "6c06f080-0380-4f6a-834e-6fe34584eea9", 00:11:14.128 "assigned_rate_limits": { 00:11:14.128 "rw_ios_per_sec": 0, 00:11:14.128 "rw_mbytes_per_sec": 0, 00:11:14.129 "r_mbytes_per_sec": 0, 00:11:14.129 "w_mbytes_per_sec": 0 00:11:14.129 }, 00:11:14.129 "claimed": false, 00:11:14.129 "zoned": false, 00:11:14.129 "supported_io_types": { 00:11:14.129 "read": true, 00:11:14.129 "write": true, 00:11:14.129 "unmap": true, 00:11:14.129 "flush": true, 00:11:14.129 "reset": true, 00:11:14.129 "nvme_admin": false, 00:11:14.129 "nvme_io": false, 00:11:14.129 "nvme_io_md": false, 00:11:14.129 "write_zeroes": true, 00:11:14.129 "zcopy": false, 00:11:14.129 "get_zone_info": false, 00:11:14.129 "zone_management": false, 00:11:14.129 "zone_append": false, 00:11:14.129 "compare": false, 00:11:14.129 "compare_and_write": false, 00:11:14.129 "abort": false, 00:11:14.129 "seek_hole": false, 00:11:14.129 "seek_data": false, 00:11:14.129 "copy": false, 00:11:14.129 "nvme_iov_md": false 00:11:14.129 }, 00:11:14.129 
"memory_domains": [ 00:11:14.129 { 00:11:14.129 "dma_device_id": "system", 00:11:14.129 "dma_device_type": 1 00:11:14.129 }, 00:11:14.129 { 00:11:14.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.129 "dma_device_type": 2 00:11:14.129 }, 00:11:14.129 { 00:11:14.129 "dma_device_id": "system", 00:11:14.129 "dma_device_type": 1 00:11:14.129 }, 00:11:14.129 { 00:11:14.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.129 "dma_device_type": 2 00:11:14.129 }, 00:11:14.129 { 00:11:14.129 "dma_device_id": "system", 00:11:14.129 "dma_device_type": 1 00:11:14.129 }, 00:11:14.129 { 00:11:14.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.129 "dma_device_type": 2 00:11:14.129 }, 00:11:14.129 { 00:11:14.129 "dma_device_id": "system", 00:11:14.129 "dma_device_type": 1 00:11:14.129 }, 00:11:14.129 { 00:11:14.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.129 "dma_device_type": 2 00:11:14.129 } 00:11:14.129 ], 00:11:14.129 "driver_specific": { 00:11:14.129 "raid": { 00:11:14.129 "uuid": "6c06f080-0380-4f6a-834e-6fe34584eea9", 00:11:14.129 "strip_size_kb": 64, 00:11:14.129 "state": "online", 00:11:14.129 "raid_level": "concat", 00:11:14.129 "superblock": true, 00:11:14.129 "num_base_bdevs": 4, 00:11:14.129 "num_base_bdevs_discovered": 4, 00:11:14.129 "num_base_bdevs_operational": 4, 00:11:14.129 "base_bdevs_list": [ 00:11:14.129 { 00:11:14.129 "name": "pt1", 00:11:14.129 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:14.129 "is_configured": true, 00:11:14.129 "data_offset": 2048, 00:11:14.129 "data_size": 63488 00:11:14.129 }, 00:11:14.129 { 00:11:14.129 "name": "pt2", 00:11:14.129 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:14.129 "is_configured": true, 00:11:14.129 "data_offset": 2048, 00:11:14.129 "data_size": 63488 00:11:14.129 }, 00:11:14.129 { 00:11:14.129 "name": "pt3", 00:11:14.129 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:14.129 "is_configured": true, 00:11:14.129 "data_offset": 2048, 00:11:14.129 "data_size": 63488 
00:11:14.129 }, 00:11:14.129 { 00:11:14.129 "name": "pt4", 00:11:14.129 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:14.129 "is_configured": true, 00:11:14.129 "data_offset": 2048, 00:11:14.129 "data_size": 63488 00:11:14.129 } 00:11:14.129 ] 00:11:14.129 } 00:11:14.129 } 00:11:14.129 }' 00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:14.129 pt2 00:11:14.129 pt3 00:11:14.129 pt4' 00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.129 15:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.390 15:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.390 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.390 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.390 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:14.390 15:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.390 15:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.390 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:14.390 [2024-11-20 15:19:00.635599] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:14.390 15:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.390 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6c06f080-0380-4f6a-834e-6fe34584eea9 '!=' 6c06f080-0380-4f6a-834e-6fe34584eea9 ']' 00:11:14.390 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:14.390 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:14.390 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:14.390 15:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72451 00:11:14.390 15:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72451 ']' 00:11:14.390 15:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72451 00:11:14.390 15:19:00 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@959 -- # uname 00:11:14.390 15:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:14.390 15:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72451 00:11:14.390 killing process with pid 72451 00:11:14.390 15:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:14.390 15:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:14.390 15:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72451' 00:11:14.390 15:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72451 00:11:14.390 [2024-11-20 15:19:00.710603] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:14.390 [2024-11-20 15:19:00.710714] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.390 15:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72451 00:11:14.390 [2024-11-20 15:19:00.710807] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.390 [2024-11-20 15:19:00.710820] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:14.649 [2024-11-20 15:19:01.116598] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:16.027 15:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:16.027 00:11:16.027 real 0m5.594s 00:11:16.027 user 0m7.941s 00:11:16.027 sys 0m1.117s 00:11:16.027 15:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.027 15:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.027 ************************************ 00:11:16.027 END TEST raid_superblock_test 
00:11:16.027 ************************************ 00:11:16.027 15:19:02 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:16.027 15:19:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:16.027 15:19:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.027 15:19:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:16.027 ************************************ 00:11:16.027 START TEST raid_read_error_test 00:11:16.027 ************************************ 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tsGjlVtSNQ 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72716 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72716 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@835 -- # '[' -z 72716 ']' 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:16.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:16.028 15:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.028 [2024-11-20 15:19:02.484334] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:11:16.028 [2024-11-20 15:19:02.484470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72716 ] 00:11:16.287 [2024-11-20 15:19:02.666877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.546 [2024-11-20 15:19:02.792326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.546 [2024-11-20 15:19:03.012383] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.546 [2024-11-20 15:19:03.012439] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:17.114 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.114 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:17.114 15:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:17.114 15:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.115 BaseBdev1_malloc 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.115 true 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.115 [2024-11-20 15:19:03.429084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:17.115 [2024-11-20 15:19:03.429143] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.115 [2024-11-20 15:19:03.429169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:17.115 [2024-11-20 15:19:03.429183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.115 [2024-11-20 15:19:03.431724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.115 [2024-11-20 15:19:03.431775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:17.115 BaseBdev1 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.115 BaseBdev2_malloc 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.115 true 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.115 [2024-11-20 15:19:03.491170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:17.115 [2024-11-20 15:19:03.491239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.115 [2024-11-20 15:19:03.491261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:17.115 [2024-11-20 15:19:03.491277] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.115 [2024-11-20 15:19:03.493755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.115 [2024-11-20 15:19:03.493801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:17.115 BaseBdev2 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.115 BaseBdev3_malloc 00:11:17.115 15:19:03 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.115 true 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.115 [2024-11-20 15:19:03.568230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:17.115 [2024-11-20 15:19:03.568300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.115 [2024-11-20 15:19:03.568325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:17.115 [2024-11-20 15:19:03.568339] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.115 [2024-11-20 15:19:03.570995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.115 [2024-11-20 15:19:03.571043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:17.115 BaseBdev3 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.115 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.375 BaseBdev4_malloc 00:11:17.375 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.375 15:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:17.375 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.375 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.375 true 00:11:17.375 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.375 15:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:17.375 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.375 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.375 [2024-11-20 15:19:03.631698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:17.375 [2024-11-20 15:19:03.631765] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.375 [2024-11-20 15:19:03.631790] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:17.375 [2024-11-20 15:19:03.631805] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.375 [2024-11-20 15:19:03.634364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.375 [2024-11-20 15:19:03.634416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:17.375 BaseBdev4 00:11:17.375 15:19:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.375 15:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:17.375 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.375 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.375 [2024-11-20 15:19:03.643779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:17.375 [2024-11-20 15:19:03.645970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:17.375 [2024-11-20 15:19:03.646053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:17.375 [2024-11-20 15:19:03.646118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:17.375 [2024-11-20 15:19:03.646341] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:17.375 [2024-11-20 15:19:03.646356] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:17.375 [2024-11-20 15:19:03.646648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:17.375 [2024-11-20 15:19:03.646871] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:17.375 [2024-11-20 15:19:03.646885] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:17.375 [2024-11-20 15:19:03.647064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.375 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.375 15:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:17.375 15:19:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.375 15:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.375 15:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.375 15:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.375 15:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.375 15:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.375 15:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.375 15:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.375 15:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.375 15:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.375 15:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.375 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.375 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.375 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.375 15:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.375 "name": "raid_bdev1", 00:11:17.375 "uuid": "ccf4bd57-ba53-43a3-896b-bec82affd47b", 00:11:17.375 "strip_size_kb": 64, 00:11:17.375 "state": "online", 00:11:17.375 "raid_level": "concat", 00:11:17.375 "superblock": true, 00:11:17.375 "num_base_bdevs": 4, 00:11:17.375 "num_base_bdevs_discovered": 4, 00:11:17.375 "num_base_bdevs_operational": 4, 00:11:17.376 "base_bdevs_list": [ 
00:11:17.376 { 00:11:17.376 "name": "BaseBdev1", 00:11:17.376 "uuid": "72c8e1bf-4341-5168-a92e-b1394c03ebb0", 00:11:17.376 "is_configured": true, 00:11:17.376 "data_offset": 2048, 00:11:17.376 "data_size": 63488 00:11:17.376 }, 00:11:17.376 { 00:11:17.376 "name": "BaseBdev2", 00:11:17.376 "uuid": "2433270e-90e5-5c25-b69f-2f03accb8b0d", 00:11:17.376 "is_configured": true, 00:11:17.376 "data_offset": 2048, 00:11:17.376 "data_size": 63488 00:11:17.376 }, 00:11:17.376 { 00:11:17.376 "name": "BaseBdev3", 00:11:17.376 "uuid": "fd8ace09-a43e-5acb-82bc-c41c71b3521b", 00:11:17.376 "is_configured": true, 00:11:17.376 "data_offset": 2048, 00:11:17.376 "data_size": 63488 00:11:17.376 }, 00:11:17.376 { 00:11:17.376 "name": "BaseBdev4", 00:11:17.376 "uuid": "658b8775-038c-5ced-a729-655d1016d594", 00:11:17.376 "is_configured": true, 00:11:17.376 "data_offset": 2048, 00:11:17.376 "data_size": 63488 00:11:17.376 } 00:11:17.376 ] 00:11:17.376 }' 00:11:17.376 15:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.376 15:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.638 15:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:17.638 15:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:17.897 [2024-11-20 15:19:04.200378] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:18.837 15:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:18.837 15:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.837 15:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.837 15:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.837 15:19:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:18.837 15:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:18.837 15:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:18.837 15:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:18.837 15:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.837 15:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.837 15:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:18.837 15:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.837 15:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.837 15:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.837 15:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.837 15:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.837 15:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.837 15:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.837 15:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.837 15:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.837 15:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.837 15:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.837 15:19:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.837 "name": "raid_bdev1", 00:11:18.837 "uuid": "ccf4bd57-ba53-43a3-896b-bec82affd47b", 00:11:18.837 "strip_size_kb": 64, 00:11:18.837 "state": "online", 00:11:18.837 "raid_level": "concat", 00:11:18.837 "superblock": true, 00:11:18.837 "num_base_bdevs": 4, 00:11:18.837 "num_base_bdevs_discovered": 4, 00:11:18.837 "num_base_bdevs_operational": 4, 00:11:18.837 "base_bdevs_list": [ 00:11:18.837 { 00:11:18.837 "name": "BaseBdev1", 00:11:18.837 "uuid": "72c8e1bf-4341-5168-a92e-b1394c03ebb0", 00:11:18.837 "is_configured": true, 00:11:18.837 "data_offset": 2048, 00:11:18.837 "data_size": 63488 00:11:18.837 }, 00:11:18.837 { 00:11:18.837 "name": "BaseBdev2", 00:11:18.837 "uuid": "2433270e-90e5-5c25-b69f-2f03accb8b0d", 00:11:18.837 "is_configured": true, 00:11:18.837 "data_offset": 2048, 00:11:18.837 "data_size": 63488 00:11:18.837 }, 00:11:18.837 { 00:11:18.837 "name": "BaseBdev3", 00:11:18.837 "uuid": "fd8ace09-a43e-5acb-82bc-c41c71b3521b", 00:11:18.837 "is_configured": true, 00:11:18.837 "data_offset": 2048, 00:11:18.837 "data_size": 63488 00:11:18.837 }, 00:11:18.837 { 00:11:18.837 "name": "BaseBdev4", 00:11:18.837 "uuid": "658b8775-038c-5ced-a729-655d1016d594", 00:11:18.837 "is_configured": true, 00:11:18.837 "data_offset": 2048, 00:11:18.837 "data_size": 63488 00:11:18.837 } 00:11:18.837 ] 00:11:18.837 }' 00:11:18.837 15:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.837 15:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.096 15:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:19.096 15:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.096 15:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.096 [2024-11-20 15:19:05.492756] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:19.096 [2024-11-20 15:19:05.492800] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:19.096 [2024-11-20 15:19:05.495637] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:19.096 [2024-11-20 15:19:05.495738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.096 [2024-11-20 15:19:05.495786] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:19.096 [2024-11-20 15:19:05.495802] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:19.096 { 00:11:19.096 "results": [ 00:11:19.096 { 00:11:19.096 "job": "raid_bdev1", 00:11:19.096 "core_mask": "0x1", 00:11:19.096 "workload": "randrw", 00:11:19.096 "percentage": 50, 00:11:19.096 "status": "finished", 00:11:19.096 "queue_depth": 1, 00:11:19.096 "io_size": 131072, 00:11:19.096 "runtime": 1.292238, 00:11:19.096 "iops": 15300.587043563182, 00:11:19.096 "mibps": 1912.5733804453978, 00:11:19.096 "io_failed": 1, 00:11:19.096 "io_timeout": 0, 00:11:19.096 "avg_latency_us": 90.33622117052644, 00:11:19.096 "min_latency_us": 27.347791164658634, 00:11:19.096 "max_latency_us": 1506.8016064257029 00:11:19.096 } 00:11:19.096 ], 00:11:19.096 "core_count": 1 00:11:19.096 } 00:11:19.096 15:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.096 15:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72716 00:11:19.096 15:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72716 ']' 00:11:19.096 15:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72716 00:11:19.096 15:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:19.096 15:19:05 bdev_raid.raid_read_error_test 
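The bdevperf results block above reports `io_failed: 1` over `runtime: 1.292238` seconds, and the `fail_per_s=0.77` value extracted from the bdevperf log a few lines later is consistent with that ratio. A minimal arithmetic check (plain Python, not part of the test suite; the numbers are copied verbatim from the results JSON above):

```python
# Failure rate implied by the bdevperf "results" block in the trace above.
io_failed = 1          # from "io_failed": 1
runtime_s = 1.292238   # from "runtime": 1.292238

fail_per_s = io_failed / runtime_s
print(round(fail_per_s, 2))  # 0.77, matching the fail_per_s the script extracts
```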
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.096 15:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72716 00:11:19.096 15:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:19.096 15:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:19.096 killing process with pid 72716 00:11:19.096 15:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72716' 00:11:19.096 15:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72716 00:11:19.096 [2024-11-20 15:19:05.548159] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:19.096 15:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72716 00:11:19.662 [2024-11-20 15:19:05.895159] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:21.038 15:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tsGjlVtSNQ 00:11:21.038 15:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:21.038 15:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:21.038 15:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.77 00:11:21.038 15:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:21.038 15:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:21.038 15:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:21.038 15:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.77 != \0\.\0\0 ]] 00:11:21.038 00:11:21.038 real 0m4.782s 00:11:21.038 user 0m5.634s 00:11:21.038 sys 0m0.671s 00:11:21.038 15:19:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:21.038 15:19:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.038 ************************************ 00:11:21.038 END TEST raid_read_error_test 00:11:21.038 ************************************ 00:11:21.038 15:19:07 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:21.038 15:19:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:21.038 15:19:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.038 15:19:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:21.038 ************************************ 00:11:21.038 START TEST raid_write_error_test 00:11:21.038 ************************************ 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6loCnSS58s 00:11:21.038 15:19:07 
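The `bdev_raid.sh@800-802` steps in the trace above show `create_arg` gaining `' -z 64'` because the level under test (`concat`) is not `raid1`; the earlier `[[ concat = \r\a\i\d\1 ]]` check in the read-error test is the same branch. A hedged Python sketch of just that branch (the function name is ours, not SPDK's):

```python
def strip_size_arg(raid_level: str, strip_size: int = 64) -> str:
    # Mirrors bdev_raid.sh@800-802 as seen in the trace: raid1 takes no
    # strip size, every other level appends " -z <strip_size>" to create_arg.
    if raid_level != "raid1":
        return f" -z {strip_size}"
    return ""

print(repr(strip_size_arg("concat")))  # ' -z 64', as built in the trace
print(repr(strip_size_arg("raid1")))   # ''
```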
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72866 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72866 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 72866 ']' 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.038 15:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.038 [2024-11-20 15:19:07.348001] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:11:21.038 [2024-11-20 15:19:07.348136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72866 ] 00:11:21.297 [2024-11-20 15:19:07.533201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.297 [2024-11-20 15:19:07.653751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.556 [2024-11-20 15:19:07.879160] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:21.556 [2024-11-20 15:19:07.879219] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:21.814 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.814 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:21.814 15:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:21.814 15:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:21.814 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.814 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.814 BaseBdev1_malloc 00:11:21.814 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.814 15:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:21.814 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.814 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.814 true 00:11:21.814 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:21.814 15:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:21.814 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.814 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.814 [2024-11-20 15:19:08.250941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:21.814 [2024-11-20 15:19:08.251142] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.814 [2024-11-20 15:19:08.251179] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:21.814 [2024-11-20 15:19:08.251194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.814 [2024-11-20 15:19:08.253809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.814 [2024-11-20 15:19:08.253956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:21.814 BaseBdev1 00:11:21.814 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.814 15:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:21.814 15:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:21.814 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.814 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.132 BaseBdev2_malloc 00:11:22.132 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.132 15:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:22.132 15:19:08 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.132 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.132 true 00:11:22.132 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.132 15:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:22.132 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.132 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.132 [2024-11-20 15:19:08.320009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:22.132 [2024-11-20 15:19:08.320087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.132 [2024-11-20 15:19:08.320111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:22.132 [2024-11-20 15:19:08.320125] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.132 [2024-11-20 15:19:08.322728] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.132 [2024-11-20 15:19:08.322772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:22.132 BaseBdev2 00:11:22.132 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.132 15:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:22.132 15:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:22.132 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.132 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:22.132 BaseBdev3_malloc 00:11:22.132 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.132 15:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:22.132 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.132 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.132 true 00:11:22.132 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.132 15:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:22.132 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.132 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.132 [2024-11-20 15:19:08.406383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:22.132 [2024-11-20 15:19:08.406446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.132 [2024-11-20 15:19:08.406471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:22.133 [2024-11-20 15:19:08.406485] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.133 [2024-11-20 15:19:08.409110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.133 [2024-11-20 15:19:08.409154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:22.133 BaseBdev3 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.133 BaseBdev4_malloc 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.133 true 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.133 [2024-11-20 15:19:08.473162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:22.133 [2024-11-20 15:19:08.473215] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.133 [2024-11-20 15:19:08.473255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:22.133 [2024-11-20 15:19:08.473282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.133 [2024-11-20 15:19:08.475792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.133 [2024-11-20 15:19:08.475834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:22.133 BaseBdev4 
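Each of the four base bdevs above is built by the same three-RPC chain (`bdev_raid.sh@815-817`): a malloc bdev, an error bdev wrapping it, then a passthru bdev exposing it as `BaseBdevN`. A sketch that only generates that RPC command sequence as strings (nothing is executed; the names and sizes follow the trace):

```python
def base_bdev_rpcs(num_base_bdevs: int = 4):
    # Mirrors the loop in bdev_raid.sh@814-817 visible in the trace:
    # malloc -> error -> passthru, repeated for BaseBdev1..BaseBdev4.
    cmds = []
    for i in range(1, num_base_bdevs + 1):
        name = f"BaseBdev{i}"
        cmds.append(f"bdev_malloc_create 32 512 -b {name}_malloc")
        cmds.append(f"bdev_error_create {name}_malloc")
        cmds.append(f"bdev_passthru_create -b EE_{name}_malloc -p {name}")
    return cmds

for cmd in base_bdev_rpcs():
    print(cmd)
```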
00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.133 [2024-11-20 15:19:08.485235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:22.133 [2024-11-20 15:19:08.487345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:22.133 [2024-11-20 15:19:08.487432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:22.133 [2024-11-20 15:19:08.487497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:22.133 [2024-11-20 15:19:08.487755] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:22.133 [2024-11-20 15:19:08.487772] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:22.133 [2024-11-20 15:19:08.488068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:22.133 [2024-11-20 15:19:08.488250] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:22.133 [2024-11-20 15:19:08.488271] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:22.133 [2024-11-20 15:19:08.488445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.133 "name": "raid_bdev1", 00:11:22.133 "uuid": "079186c7-97a9-4224-af46-6b604c11e6dd", 00:11:22.133 "strip_size_kb": 64, 00:11:22.133 "state": "online", 00:11:22.133 "raid_level": "concat", 00:11:22.133 "superblock": true, 00:11:22.133 "num_base_bdevs": 4, 00:11:22.133 "num_base_bdevs_discovered": 4, 00:11:22.133 
"num_base_bdevs_operational": 4, 00:11:22.133 "base_bdevs_list": [ 00:11:22.133 { 00:11:22.133 "name": "BaseBdev1", 00:11:22.133 "uuid": "d603aaae-5f05-5b3d-88dd-f48ac6a92c1b", 00:11:22.133 "is_configured": true, 00:11:22.133 "data_offset": 2048, 00:11:22.133 "data_size": 63488 00:11:22.133 }, 00:11:22.133 { 00:11:22.133 "name": "BaseBdev2", 00:11:22.133 "uuid": "1e6ebebb-9fe0-545a-a83c-910d79610de4", 00:11:22.133 "is_configured": true, 00:11:22.133 "data_offset": 2048, 00:11:22.133 "data_size": 63488 00:11:22.133 }, 00:11:22.133 { 00:11:22.133 "name": "BaseBdev3", 00:11:22.133 "uuid": "6899e736-3120-5b70-b4b6-692d41af15d4", 00:11:22.133 "is_configured": true, 00:11:22.133 "data_offset": 2048, 00:11:22.133 "data_size": 63488 00:11:22.133 }, 00:11:22.133 { 00:11:22.133 "name": "BaseBdev4", 00:11:22.133 "uuid": "0e472d5f-5fac-5827-965a-1dd27f895c5a", 00:11:22.133 "is_configured": true, 00:11:22.133 "data_offset": 2048, 00:11:22.133 "data_size": 63488 00:11:22.133 } 00:11:22.133 ] 00:11:22.133 }' 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.133 15:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.409 15:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:22.409 15:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:22.669 [2024-11-20 15:19:08.986059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:23.620 15:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:23.620 15:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.620 15:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.620 15:19:09 bdev_raid.raid_write_error_test -- 
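`verify_raid_bdev_state` (`bdev_raid.sh@103-115`) filters the output of `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "raid_bdev1")'` and then compares state, level, strip size, and base-bdev counts. A Python equivalent of those checks, run against an abbreviated copy of the `raid_bdev_info` JSON printed above (base_bdevs_list omitted for brevity):

```python
import json

# Abbreviated copy of the raid_bdev_info JSON dumped in the trace above.
raid_bdev_info = json.loads("""{
  "name": "raid_bdev1",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "concat",
  "superblock": true,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 4,
  "num_base_bdevs_operational": 4
}""")

# The same comparisons verify_raid_bdev_state makes after its jq select.
assert raid_bdev_info["state"] == "online"
assert raid_bdev_info["raid_level"] == "concat"
assert raid_bdev_info["strip_size_kb"] == 64
assert raid_bdev_info["num_base_bdevs_operational"] == 4
print("raid_bdev1 state verified")
```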
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.620 15:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:23.620 15:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:23.620 15:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:23.620 15:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:23.620 15:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:23.620 15:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.620 15:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.620 15:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.620 15:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.620 15:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.620 15:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.620 15:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.620 15:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.620 15:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.620 15:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.620 15:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.620 15:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.620 15:19:09 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.620 15:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.620 "name": "raid_bdev1", 00:11:23.620 "uuid": "079186c7-97a9-4224-af46-6b604c11e6dd", 00:11:23.620 "strip_size_kb": 64, 00:11:23.620 "state": "online", 00:11:23.620 "raid_level": "concat", 00:11:23.620 "superblock": true, 00:11:23.620 "num_base_bdevs": 4, 00:11:23.620 "num_base_bdevs_discovered": 4, 00:11:23.620 "num_base_bdevs_operational": 4, 00:11:23.620 "base_bdevs_list": [ 00:11:23.620 { 00:11:23.620 "name": "BaseBdev1", 00:11:23.620 "uuid": "d603aaae-5f05-5b3d-88dd-f48ac6a92c1b", 00:11:23.620 "is_configured": true, 00:11:23.620 "data_offset": 2048, 00:11:23.620 "data_size": 63488 00:11:23.620 }, 00:11:23.620 { 00:11:23.620 "name": "BaseBdev2", 00:11:23.620 "uuid": "1e6ebebb-9fe0-545a-a83c-910d79610de4", 00:11:23.620 "is_configured": true, 00:11:23.620 "data_offset": 2048, 00:11:23.620 "data_size": 63488 00:11:23.620 }, 00:11:23.620 { 00:11:23.620 "name": "BaseBdev3", 00:11:23.620 "uuid": "6899e736-3120-5b70-b4b6-692d41af15d4", 00:11:23.620 "is_configured": true, 00:11:23.620 "data_offset": 2048, 00:11:23.620 "data_size": 63488 00:11:23.620 }, 00:11:23.620 { 00:11:23.620 "name": "BaseBdev4", 00:11:23.620 "uuid": "0e472d5f-5fac-5827-965a-1dd27f895c5a", 00:11:23.620 "is_configured": true, 00:11:23.620 "data_offset": 2048, 00:11:23.620 "data_size": 63488 00:11:23.620 } 00:11:23.620 ] 00:11:23.620 }' 00:11:23.620 15:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.620 15:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.879 15:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:23.879 15:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.879 15:19:10 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:23.879 [2024-11-20 15:19:10.304649] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:23.879 [2024-11-20 15:19:10.304705] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:23.879 [2024-11-20 15:19:10.307612] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:23.879 [2024-11-20 15:19:10.307695] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.879 [2024-11-20 15:19:10.307743] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:23.879 [2024-11-20 15:19:10.307758] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:23.879 { 00:11:23.879 "results": [ 00:11:23.879 { 00:11:23.879 "job": "raid_bdev1", 00:11:23.879 "core_mask": "0x1", 00:11:23.879 "workload": "randrw", 00:11:23.879 "percentage": 50, 00:11:23.879 "status": "finished", 00:11:23.879 "queue_depth": 1, 00:11:23.879 "io_size": 131072, 00:11:23.879 "runtime": 1.318503, 00:11:23.879 "iops": 15086.806780113508, 00:11:23.879 "mibps": 1885.8508475141884, 00:11:23.879 "io_failed": 1, 00:11:23.879 "io_timeout": 0, 00:11:23.879 "avg_latency_us": 91.5637629995173, 00:11:23.879 "min_latency_us": 27.142168674698794, 00:11:23.879 "max_latency_us": 1526.5413654618474 00:11:23.879 } 00:11:23.879 ], 00:11:23.879 "core_count": 1 00:11:23.879 } 00:11:23.879 15:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.879 15:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72866 00:11:23.879 15:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 72866 ']' 00:11:23.879 15:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 72866 00:11:23.879 15:19:10 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:23.879 15:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:23.879 15:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72866 00:11:23.879 killing process with pid 72866 00:11:23.879 15:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:23.879 15:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:23.879 15:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72866' 00:11:23.879 15:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 72866 00:11:23.879 [2024-11-20 15:19:10.349343] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:23.879 15:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 72866 00:11:24.447 [2024-11-20 15:19:10.685757] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:25.825 15:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:25.825 15:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6loCnSS58s 00:11:25.825 15:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:25.825 15:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:11:25.825 15:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:25.825 15:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:25.825 15:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:25.825 15:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:11:25.825 00:11:25.825 real 0m4.704s 00:11:25.825 user 0m5.465s 
00:11:25.825 sys 0m0.629s 00:11:25.825 15:19:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:25.825 15:19:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.825 ************************************ 00:11:25.825 END TEST raid_write_error_test 00:11:25.825 ************************************ 00:11:25.825 15:19:11 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:25.825 15:19:11 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:25.825 15:19:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:25.825 15:19:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.825 15:19:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:25.825 ************************************ 00:11:25.825 START TEST raid_state_function_test 00:11:25.825 ************************************ 00:11:25.825 15:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:11:25.825 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:25.825 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:25.825 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:25.825 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:25.825 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:25.825 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.825 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:25.825 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:25.825 
15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.825 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:25.825 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:25.825 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.825 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:25.825 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:25.825 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.825 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:25.825 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:25.825 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.825 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:25.825 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:25.825 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:25.825 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:25.825 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:25.825 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:25.825 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:25.825 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:25.825 15:19:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:25.826 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:25.826 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73005 00:11:25.826 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:25.826 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73005' 00:11:25.826 Process raid pid: 73005 00:11:25.826 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73005 00:11:25.826 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73005 ']' 00:11:25.826 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.826 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:25.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.826 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.826 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:25.826 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.826 [2024-11-20 15:19:12.115228] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:11:25.826 [2024-11-20 15:19:12.115373] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.826 [2024-11-20 15:19:12.296446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.085 [2024-11-20 15:19:12.426227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.343 [2024-11-20 15:19:12.652641] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:26.343 [2024-11-20 15:19:12.652708] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:26.618 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:26.618 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:26.618 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:26.618 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.618 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.618 [2024-11-20 15:19:12.996476] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:26.618 [2024-11-20 15:19:12.996544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:26.618 [2024-11-20 15:19:12.996557] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:26.618 [2024-11-20 15:19:12.996570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:26.618 [2024-11-20 15:19:12.996579] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:26.618 [2024-11-20 15:19:12.996591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:26.618 [2024-11-20 15:19:12.996606] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:26.618 [2024-11-20 15:19:12.996619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:26.618 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.618 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:26.618 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.618 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.618 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.618 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.618 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.618 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.618 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.618 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.618 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.618 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.618 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.618 15:19:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.618 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.618 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.618 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.618 "name": "Existed_Raid", 00:11:26.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.618 "strip_size_kb": 0, 00:11:26.618 "state": "configuring", 00:11:26.618 "raid_level": "raid1", 00:11:26.618 "superblock": false, 00:11:26.618 "num_base_bdevs": 4, 00:11:26.618 "num_base_bdevs_discovered": 0, 00:11:26.618 "num_base_bdevs_operational": 4, 00:11:26.618 "base_bdevs_list": [ 00:11:26.618 { 00:11:26.618 "name": "BaseBdev1", 00:11:26.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.618 "is_configured": false, 00:11:26.618 "data_offset": 0, 00:11:26.618 "data_size": 0 00:11:26.618 }, 00:11:26.618 { 00:11:26.618 "name": "BaseBdev2", 00:11:26.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.618 "is_configured": false, 00:11:26.618 "data_offset": 0, 00:11:26.618 "data_size": 0 00:11:26.618 }, 00:11:26.618 { 00:11:26.618 "name": "BaseBdev3", 00:11:26.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.618 "is_configured": false, 00:11:26.618 "data_offset": 0, 00:11:26.618 "data_size": 0 00:11:26.618 }, 00:11:26.618 { 00:11:26.618 "name": "BaseBdev4", 00:11:26.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.618 "is_configured": false, 00:11:26.618 "data_offset": 0, 00:11:26.618 "data_size": 0 00:11:26.618 } 00:11:26.618 ] 00:11:26.618 }' 00:11:26.618 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.618 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.211 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:27.211 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.211 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.211 [2024-11-20 15:19:13.451819] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:27.211 [2024-11-20 15:19:13.451879] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:27.211 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.211 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:27.211 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.211 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.211 [2024-11-20 15:19:13.459800] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:27.211 [2024-11-20 15:19:13.459855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:27.211 [2024-11-20 15:19:13.459867] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:27.211 [2024-11-20 15:19:13.459881] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:27.211 [2024-11-20 15:19:13.459889] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:27.211 [2024-11-20 15:19:13.459903] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:27.211 [2024-11-20 15:19:13.459911] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:27.211 [2024-11-20 15:19:13.459937] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:27.211 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.211 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:27.211 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.211 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.211 [2024-11-20 15:19:13.509223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.211 BaseBdev1 00:11:27.211 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.211 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:27.212 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:27.212 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:27.212 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:27.212 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:27.212 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:27.212 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:27.212 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.212 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.212 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.212 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:27.212 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.212 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.212 [ 00:11:27.212 { 00:11:27.212 "name": "BaseBdev1", 00:11:27.212 "aliases": [ 00:11:27.212 "4c4eb643-1c5f-41fb-bf62-f974546ee633" 00:11:27.212 ], 00:11:27.212 "product_name": "Malloc disk", 00:11:27.212 "block_size": 512, 00:11:27.212 "num_blocks": 65536, 00:11:27.212 "uuid": "4c4eb643-1c5f-41fb-bf62-f974546ee633", 00:11:27.212 "assigned_rate_limits": { 00:11:27.212 "rw_ios_per_sec": 0, 00:11:27.212 "rw_mbytes_per_sec": 0, 00:11:27.212 "r_mbytes_per_sec": 0, 00:11:27.212 "w_mbytes_per_sec": 0 00:11:27.212 }, 00:11:27.212 "claimed": true, 00:11:27.212 "claim_type": "exclusive_write", 00:11:27.212 "zoned": false, 00:11:27.212 "supported_io_types": { 00:11:27.212 "read": true, 00:11:27.212 "write": true, 00:11:27.212 "unmap": true, 00:11:27.212 "flush": true, 00:11:27.212 "reset": true, 00:11:27.212 "nvme_admin": false, 00:11:27.212 "nvme_io": false, 00:11:27.212 "nvme_io_md": false, 00:11:27.212 "write_zeroes": true, 00:11:27.212 "zcopy": true, 00:11:27.212 "get_zone_info": false, 00:11:27.212 "zone_management": false, 00:11:27.212 "zone_append": false, 00:11:27.212 "compare": false, 00:11:27.212 "compare_and_write": false, 00:11:27.212 "abort": true, 00:11:27.212 "seek_hole": false, 00:11:27.212 "seek_data": false, 00:11:27.212 "copy": true, 00:11:27.212 "nvme_iov_md": false 00:11:27.212 }, 00:11:27.212 "memory_domains": [ 00:11:27.212 { 00:11:27.212 "dma_device_id": "system", 00:11:27.212 "dma_device_type": 1 00:11:27.212 }, 00:11:27.212 { 00:11:27.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.212 "dma_device_type": 2 00:11:27.212 } 00:11:27.212 ], 00:11:27.212 "driver_specific": {} 00:11:27.212 } 00:11:27.212 ] 00:11:27.212 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:27.212 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:27.212 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:27.212 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.212 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.212 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.212 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.212 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.212 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.212 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.212 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.212 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.212 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.212 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.212 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.212 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.212 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.212 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.212 "name": "Existed_Raid", 
00:11:27.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.212 "strip_size_kb": 0, 00:11:27.212 "state": "configuring", 00:11:27.212 "raid_level": "raid1", 00:11:27.212 "superblock": false, 00:11:27.212 "num_base_bdevs": 4, 00:11:27.212 "num_base_bdevs_discovered": 1, 00:11:27.212 "num_base_bdevs_operational": 4, 00:11:27.212 "base_bdevs_list": [ 00:11:27.212 { 00:11:27.212 "name": "BaseBdev1", 00:11:27.212 "uuid": "4c4eb643-1c5f-41fb-bf62-f974546ee633", 00:11:27.212 "is_configured": true, 00:11:27.212 "data_offset": 0, 00:11:27.212 "data_size": 65536 00:11:27.212 }, 00:11:27.212 { 00:11:27.212 "name": "BaseBdev2", 00:11:27.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.212 "is_configured": false, 00:11:27.212 "data_offset": 0, 00:11:27.212 "data_size": 0 00:11:27.212 }, 00:11:27.212 { 00:11:27.212 "name": "BaseBdev3", 00:11:27.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.212 "is_configured": false, 00:11:27.212 "data_offset": 0, 00:11:27.212 "data_size": 0 00:11:27.212 }, 00:11:27.212 { 00:11:27.212 "name": "BaseBdev4", 00:11:27.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.212 "is_configured": false, 00:11:27.212 "data_offset": 0, 00:11:27.212 "data_size": 0 00:11:27.212 } 00:11:27.212 ] 00:11:27.212 }' 00:11:27.212 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.212 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.779 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:27.779 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.779 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.779 [2024-11-20 15:19:14.004590] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:27.779 [2024-11-20 15:19:14.004666] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:27.779 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.779 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:27.779 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.779 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.779 [2024-11-20 15:19:14.012627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.779 [2024-11-20 15:19:14.014851] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:27.779 [2024-11-20 15:19:14.014902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:27.779 [2024-11-20 15:19:14.014914] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:27.779 [2024-11-20 15:19:14.014930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:27.779 [2024-11-20 15:19:14.014939] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:27.779 [2024-11-20 15:19:14.014952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:27.779 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.779 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:27.779 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:27.779 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:27.779 
15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.779 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.779 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.779 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.779 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.779 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.779 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.779 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.779 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.779 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.779 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.779 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.779 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.779 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.779 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.779 "name": "Existed_Raid", 00:11:27.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.779 "strip_size_kb": 0, 00:11:27.779 "state": "configuring", 00:11:27.779 "raid_level": "raid1", 00:11:27.779 "superblock": false, 00:11:27.779 "num_base_bdevs": 4, 00:11:27.779 "num_base_bdevs_discovered": 1, 
00:11:27.779 "num_base_bdevs_operational": 4, 00:11:27.779 "base_bdevs_list": [ 00:11:27.779 { 00:11:27.779 "name": "BaseBdev1", 00:11:27.779 "uuid": "4c4eb643-1c5f-41fb-bf62-f974546ee633", 00:11:27.779 "is_configured": true, 00:11:27.779 "data_offset": 0, 00:11:27.779 "data_size": 65536 00:11:27.779 }, 00:11:27.779 { 00:11:27.779 "name": "BaseBdev2", 00:11:27.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.779 "is_configured": false, 00:11:27.779 "data_offset": 0, 00:11:27.779 "data_size": 0 00:11:27.779 }, 00:11:27.779 { 00:11:27.779 "name": "BaseBdev3", 00:11:27.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.779 "is_configured": false, 00:11:27.779 "data_offset": 0, 00:11:27.779 "data_size": 0 00:11:27.779 }, 00:11:27.779 { 00:11:27.779 "name": "BaseBdev4", 00:11:27.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.779 "is_configured": false, 00:11:27.779 "data_offset": 0, 00:11:27.779 "data_size": 0 00:11:27.779 } 00:11:27.779 ] 00:11:27.779 }' 00:11:27.779 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.779 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.038 [2024-11-20 15:19:14.462087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:28.038 BaseBdev2 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.038 [ 00:11:28.038 { 00:11:28.038 "name": "BaseBdev2", 00:11:28.038 "aliases": [ 00:11:28.038 "0abc5135-2c06-4eef-b508-3433cc0c0fe0" 00:11:28.038 ], 00:11:28.038 "product_name": "Malloc disk", 00:11:28.038 "block_size": 512, 00:11:28.038 "num_blocks": 65536, 00:11:28.038 "uuid": "0abc5135-2c06-4eef-b508-3433cc0c0fe0", 00:11:28.038 "assigned_rate_limits": { 00:11:28.038 "rw_ios_per_sec": 0, 00:11:28.038 "rw_mbytes_per_sec": 0, 00:11:28.038 "r_mbytes_per_sec": 0, 00:11:28.038 "w_mbytes_per_sec": 0 00:11:28.038 }, 00:11:28.038 "claimed": true, 00:11:28.038 "claim_type": "exclusive_write", 00:11:28.038 "zoned": false, 00:11:28.038 "supported_io_types": { 00:11:28.038 "read": true, 
00:11:28.038 "write": true, 00:11:28.038 "unmap": true, 00:11:28.038 "flush": true, 00:11:28.038 "reset": true, 00:11:28.038 "nvme_admin": false, 00:11:28.038 "nvme_io": false, 00:11:28.038 "nvme_io_md": false, 00:11:28.038 "write_zeroes": true, 00:11:28.038 "zcopy": true, 00:11:28.038 "get_zone_info": false, 00:11:28.038 "zone_management": false, 00:11:28.038 "zone_append": false, 00:11:28.038 "compare": false, 00:11:28.038 "compare_and_write": false, 00:11:28.038 "abort": true, 00:11:28.038 "seek_hole": false, 00:11:28.038 "seek_data": false, 00:11:28.038 "copy": true, 00:11:28.038 "nvme_iov_md": false 00:11:28.038 }, 00:11:28.038 "memory_domains": [ 00:11:28.038 { 00:11:28.038 "dma_device_id": "system", 00:11:28.038 "dma_device_type": 1 00:11:28.038 }, 00:11:28.038 { 00:11:28.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.038 "dma_device_type": 2 00:11:28.038 } 00:11:28.038 ], 00:11:28.038 "driver_specific": {} 00:11:28.038 } 00:11:28.038 ] 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.038 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.297 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.297 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.297 "name": "Existed_Raid", 00:11:28.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.297 "strip_size_kb": 0, 00:11:28.297 "state": "configuring", 00:11:28.297 "raid_level": "raid1", 00:11:28.297 "superblock": false, 00:11:28.297 "num_base_bdevs": 4, 00:11:28.297 "num_base_bdevs_discovered": 2, 00:11:28.297 "num_base_bdevs_operational": 4, 00:11:28.297 "base_bdevs_list": [ 00:11:28.297 { 00:11:28.297 "name": "BaseBdev1", 00:11:28.297 "uuid": "4c4eb643-1c5f-41fb-bf62-f974546ee633", 00:11:28.297 "is_configured": true, 00:11:28.297 "data_offset": 0, 00:11:28.298 "data_size": 65536 00:11:28.298 }, 00:11:28.298 { 00:11:28.298 "name": "BaseBdev2", 00:11:28.298 "uuid": "0abc5135-2c06-4eef-b508-3433cc0c0fe0", 00:11:28.298 "is_configured": true, 
00:11:28.298 "data_offset": 0, 00:11:28.298 "data_size": 65536 00:11:28.298 }, 00:11:28.298 { 00:11:28.298 "name": "BaseBdev3", 00:11:28.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.298 "is_configured": false, 00:11:28.298 "data_offset": 0, 00:11:28.298 "data_size": 0 00:11:28.298 }, 00:11:28.298 { 00:11:28.298 "name": "BaseBdev4", 00:11:28.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.298 "is_configured": false, 00:11:28.298 "data_offset": 0, 00:11:28.298 "data_size": 0 00:11:28.298 } 00:11:28.298 ] 00:11:28.298 }' 00:11:28.298 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.298 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.556 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:28.556 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.556 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.556 [2024-11-20 15:19:14.985149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:28.556 BaseBdev3 00:11:28.556 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.556 15:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:28.556 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:28.556 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:28.556 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:28.556 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:28.556 15:19:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:28.556 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:28.556 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.556 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.556 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.556 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:28.556 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.556 15:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.556 [ 00:11:28.556 { 00:11:28.556 "name": "BaseBdev3", 00:11:28.556 "aliases": [ 00:11:28.556 "cbf27074-97cf-47ec-9301-b43e2c4513c0" 00:11:28.556 ], 00:11:28.556 "product_name": "Malloc disk", 00:11:28.556 "block_size": 512, 00:11:28.556 "num_blocks": 65536, 00:11:28.556 "uuid": "cbf27074-97cf-47ec-9301-b43e2c4513c0", 00:11:28.556 "assigned_rate_limits": { 00:11:28.556 "rw_ios_per_sec": 0, 00:11:28.556 "rw_mbytes_per_sec": 0, 00:11:28.556 "r_mbytes_per_sec": 0, 00:11:28.556 "w_mbytes_per_sec": 0 00:11:28.557 }, 00:11:28.557 "claimed": true, 00:11:28.557 "claim_type": "exclusive_write", 00:11:28.557 "zoned": false, 00:11:28.557 "supported_io_types": { 00:11:28.557 "read": true, 00:11:28.557 "write": true, 00:11:28.557 "unmap": true, 00:11:28.557 "flush": true, 00:11:28.557 "reset": true, 00:11:28.557 "nvme_admin": false, 00:11:28.557 "nvme_io": false, 00:11:28.557 "nvme_io_md": false, 00:11:28.557 "write_zeroes": true, 00:11:28.557 "zcopy": true, 00:11:28.557 "get_zone_info": false, 00:11:28.557 "zone_management": false, 00:11:28.557 "zone_append": false, 00:11:28.557 "compare": false, 00:11:28.557 "compare_and_write": false, 
00:11:28.557 "abort": true, 00:11:28.557 "seek_hole": false, 00:11:28.557 "seek_data": false, 00:11:28.557 "copy": true, 00:11:28.557 "nvme_iov_md": false 00:11:28.557 }, 00:11:28.557 "memory_domains": [ 00:11:28.557 { 00:11:28.557 "dma_device_id": "system", 00:11:28.557 "dma_device_type": 1 00:11:28.557 }, 00:11:28.557 { 00:11:28.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.557 "dma_device_type": 2 00:11:28.557 } 00:11:28.557 ], 00:11:28.557 "driver_specific": {} 00:11:28.557 } 00:11:28.557 ] 00:11:28.557 15:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.557 15:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:28.557 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:28.557 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:28.557 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:28.557 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.557 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.557 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.557 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.557 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.557 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.557 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.557 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
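After each `bdev_malloc_create`, the log shows `waitforbdev BaseBdev3` running `bdev_wait_for_examine` and then `rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000` with the 2000 ms default timeout. The sketch below only models the wait-until-present pattern; `rpc_cmd` is stubbed so it runs standalone, and the retry count and sleep interval are illustrative, not the values used by the real helper in `autotest_common.sh`.

```shell
#!/bin/sh
# Stub standing in for SPDK's rpc_cmd; pretends only BaseBdev3 exists.
# (Real invocation: rpc_cmd bdev_get_bdevs -b <name> -t <timeout_ms>.)
rpc_cmd() {
    [ "$3" = "BaseBdev3" ]
}

# Poll until the named bdev is visible, mirroring the waitforbdev flow
# seen in the log. Retry count/interval here are illustrative.
waitforbdev() {
    bdev_name=$1
    bdev_timeout=${2:-2000}   # ms; 2000 matches the default in the log
    i=0
    while [ "$i" -lt 10 ]; do
        if rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"; then
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    return 1
}

waitforbdev BaseBdev3 && echo "BaseBdev3 ready"
```

The log's `[[ 0 == 0 ]]` checks after each `rpc_cmd` are the xtrace of this kind of exit-status test: the helper only proceeds to the `bdev_get_bdevs -t 2000` probe once the previous RPC returned 0.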
00:11:28.557 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.816 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.816 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.816 15:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.816 15:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.816 15:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.816 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.816 "name": "Existed_Raid", 00:11:28.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.816 "strip_size_kb": 0, 00:11:28.816 "state": "configuring", 00:11:28.816 "raid_level": "raid1", 00:11:28.816 "superblock": false, 00:11:28.816 "num_base_bdevs": 4, 00:11:28.816 "num_base_bdevs_discovered": 3, 00:11:28.816 "num_base_bdevs_operational": 4, 00:11:28.816 "base_bdevs_list": [ 00:11:28.816 { 00:11:28.816 "name": "BaseBdev1", 00:11:28.816 "uuid": "4c4eb643-1c5f-41fb-bf62-f974546ee633", 00:11:28.817 "is_configured": true, 00:11:28.817 "data_offset": 0, 00:11:28.817 "data_size": 65536 00:11:28.817 }, 00:11:28.817 { 00:11:28.817 "name": "BaseBdev2", 00:11:28.817 "uuid": "0abc5135-2c06-4eef-b508-3433cc0c0fe0", 00:11:28.817 "is_configured": true, 00:11:28.817 "data_offset": 0, 00:11:28.817 "data_size": 65536 00:11:28.817 }, 00:11:28.817 { 00:11:28.817 "name": "BaseBdev3", 00:11:28.817 "uuid": "cbf27074-97cf-47ec-9301-b43e2c4513c0", 00:11:28.817 "is_configured": true, 00:11:28.817 "data_offset": 0, 00:11:28.817 "data_size": 65536 00:11:28.817 }, 00:11:28.817 { 00:11:28.817 "name": "BaseBdev4", 00:11:28.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.817 "is_configured": false, 
00:11:28.817 "data_offset": 0, 00:11:28.817 "data_size": 0 00:11:28.817 } 00:11:28.817 ] 00:11:28.817 }' 00:11:28.817 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.817 15:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.076 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:29.076 15:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.077 15:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.077 [2024-11-20 15:19:15.494355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:29.077 [2024-11-20 15:19:15.494429] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:29.077 [2024-11-20 15:19:15.494450] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:29.077 [2024-11-20 15:19:15.494760] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:29.077 [2024-11-20 15:19:15.494957] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:29.077 [2024-11-20 15:19:15.494972] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:29.077 [2024-11-20 15:19:15.495262] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.077 BaseBdev4 00:11:29.077 15:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.077 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:29.077 15:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:29.077 15:19:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:29.077 15:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:29.077 15:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:29.077 15:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:29.077 15:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:29.077 15:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.077 15:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.077 15:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.077 15:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:29.077 15:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.077 15:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.077 [ 00:11:29.077 { 00:11:29.077 "name": "BaseBdev4", 00:11:29.077 "aliases": [ 00:11:29.077 "2b0aeea3-ec38-43cc-958c-859a59feeed9" 00:11:29.077 ], 00:11:29.077 "product_name": "Malloc disk", 00:11:29.077 "block_size": 512, 00:11:29.077 "num_blocks": 65536, 00:11:29.077 "uuid": "2b0aeea3-ec38-43cc-958c-859a59feeed9", 00:11:29.077 "assigned_rate_limits": { 00:11:29.077 "rw_ios_per_sec": 0, 00:11:29.077 "rw_mbytes_per_sec": 0, 00:11:29.077 "r_mbytes_per_sec": 0, 00:11:29.077 "w_mbytes_per_sec": 0 00:11:29.077 }, 00:11:29.077 "claimed": true, 00:11:29.077 "claim_type": "exclusive_write", 00:11:29.077 "zoned": false, 00:11:29.077 "supported_io_types": { 00:11:29.077 "read": true, 00:11:29.077 "write": true, 00:11:29.077 "unmap": true, 00:11:29.077 "flush": true, 00:11:29.077 "reset": true, 00:11:29.077 
"nvme_admin": false, 00:11:29.077 "nvme_io": false, 00:11:29.077 "nvme_io_md": false, 00:11:29.077 "write_zeroes": true, 00:11:29.077 "zcopy": true, 00:11:29.077 "get_zone_info": false, 00:11:29.077 "zone_management": false, 00:11:29.077 "zone_append": false, 00:11:29.077 "compare": false, 00:11:29.077 "compare_and_write": false, 00:11:29.077 "abort": true, 00:11:29.077 "seek_hole": false, 00:11:29.077 "seek_data": false, 00:11:29.077 "copy": true, 00:11:29.077 "nvme_iov_md": false 00:11:29.077 }, 00:11:29.077 "memory_domains": [ 00:11:29.077 { 00:11:29.077 "dma_device_id": "system", 00:11:29.077 "dma_device_type": 1 00:11:29.077 }, 00:11:29.077 { 00:11:29.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.077 "dma_device_type": 2 00:11:29.077 } 00:11:29.077 ], 00:11:29.077 "driver_specific": {} 00:11:29.077 } 00:11:29.077 ] 00:11:29.077 15:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.077 15:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:29.077 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:29.077 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:29.077 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:29.077 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.077 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.077 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.077 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.077 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.077 15:19:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.077 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.077 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.077 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.077 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.077 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.077 15:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.077 15:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.336 15:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.336 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.336 "name": "Existed_Raid", 00:11:29.336 "uuid": "57156ffd-5d02-4396-85d3-5abbb26b54d7", 00:11:29.336 "strip_size_kb": 0, 00:11:29.336 "state": "online", 00:11:29.336 "raid_level": "raid1", 00:11:29.336 "superblock": false, 00:11:29.336 "num_base_bdevs": 4, 00:11:29.336 "num_base_bdevs_discovered": 4, 00:11:29.336 "num_base_bdevs_operational": 4, 00:11:29.336 "base_bdevs_list": [ 00:11:29.336 { 00:11:29.336 "name": "BaseBdev1", 00:11:29.336 "uuid": "4c4eb643-1c5f-41fb-bf62-f974546ee633", 00:11:29.336 "is_configured": true, 00:11:29.336 "data_offset": 0, 00:11:29.336 "data_size": 65536 00:11:29.336 }, 00:11:29.336 { 00:11:29.336 "name": "BaseBdev2", 00:11:29.336 "uuid": "0abc5135-2c06-4eef-b508-3433cc0c0fe0", 00:11:29.336 "is_configured": true, 00:11:29.336 "data_offset": 0, 00:11:29.336 "data_size": 65536 00:11:29.336 }, 00:11:29.336 { 00:11:29.336 "name": "BaseBdev3", 00:11:29.336 "uuid": 
"cbf27074-97cf-47ec-9301-b43e2c4513c0", 00:11:29.336 "is_configured": true, 00:11:29.336 "data_offset": 0, 00:11:29.336 "data_size": 65536 00:11:29.336 }, 00:11:29.336 { 00:11:29.336 "name": "BaseBdev4", 00:11:29.336 "uuid": "2b0aeea3-ec38-43cc-958c-859a59feeed9", 00:11:29.336 "is_configured": true, 00:11:29.336 "data_offset": 0, 00:11:29.336 "data_size": 65536 00:11:29.336 } 00:11:29.336 ] 00:11:29.336 }' 00:11:29.336 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.336 15:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.595 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:29.595 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:29.595 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:29.595 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:29.595 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:29.595 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:29.595 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:29.595 15:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.595 15:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.595 15:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:29.595 [2024-11-20 15:19:15.962134] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:29.595 15:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.595 15:19:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:29.595 "name": "Existed_Raid", 00:11:29.595 "aliases": [ 00:11:29.595 "57156ffd-5d02-4396-85d3-5abbb26b54d7" 00:11:29.595 ], 00:11:29.595 "product_name": "Raid Volume", 00:11:29.595 "block_size": 512, 00:11:29.595 "num_blocks": 65536, 00:11:29.595 "uuid": "57156ffd-5d02-4396-85d3-5abbb26b54d7", 00:11:29.595 "assigned_rate_limits": { 00:11:29.595 "rw_ios_per_sec": 0, 00:11:29.595 "rw_mbytes_per_sec": 0, 00:11:29.595 "r_mbytes_per_sec": 0, 00:11:29.595 "w_mbytes_per_sec": 0 00:11:29.595 }, 00:11:29.595 "claimed": false, 00:11:29.595 "zoned": false, 00:11:29.595 "supported_io_types": { 00:11:29.595 "read": true, 00:11:29.595 "write": true, 00:11:29.595 "unmap": false, 00:11:29.595 "flush": false, 00:11:29.595 "reset": true, 00:11:29.595 "nvme_admin": false, 00:11:29.595 "nvme_io": false, 00:11:29.595 "nvme_io_md": false, 00:11:29.595 "write_zeroes": true, 00:11:29.595 "zcopy": false, 00:11:29.595 "get_zone_info": false, 00:11:29.595 "zone_management": false, 00:11:29.595 "zone_append": false, 00:11:29.595 "compare": false, 00:11:29.595 "compare_and_write": false, 00:11:29.595 "abort": false, 00:11:29.595 "seek_hole": false, 00:11:29.595 "seek_data": false, 00:11:29.595 "copy": false, 00:11:29.595 "nvme_iov_md": false 00:11:29.595 }, 00:11:29.595 "memory_domains": [ 00:11:29.595 { 00:11:29.595 "dma_device_id": "system", 00:11:29.595 "dma_device_type": 1 00:11:29.595 }, 00:11:29.595 { 00:11:29.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.595 "dma_device_type": 2 00:11:29.595 }, 00:11:29.595 { 00:11:29.595 "dma_device_id": "system", 00:11:29.595 "dma_device_type": 1 00:11:29.595 }, 00:11:29.595 { 00:11:29.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.595 "dma_device_type": 2 00:11:29.595 }, 00:11:29.595 { 00:11:29.595 "dma_device_id": "system", 00:11:29.595 "dma_device_type": 1 00:11:29.595 }, 00:11:29.595 { 00:11:29.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:29.595 "dma_device_type": 2 00:11:29.595 }, 00:11:29.595 { 00:11:29.595 "dma_device_id": "system", 00:11:29.595 "dma_device_type": 1 00:11:29.595 }, 00:11:29.595 { 00:11:29.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.595 "dma_device_type": 2 00:11:29.595 } 00:11:29.595 ], 00:11:29.595 "driver_specific": { 00:11:29.595 "raid": { 00:11:29.595 "uuid": "57156ffd-5d02-4396-85d3-5abbb26b54d7", 00:11:29.595 "strip_size_kb": 0, 00:11:29.595 "state": "online", 00:11:29.595 "raid_level": "raid1", 00:11:29.595 "superblock": false, 00:11:29.595 "num_base_bdevs": 4, 00:11:29.595 "num_base_bdevs_discovered": 4, 00:11:29.595 "num_base_bdevs_operational": 4, 00:11:29.595 "base_bdevs_list": [ 00:11:29.595 { 00:11:29.595 "name": "BaseBdev1", 00:11:29.595 "uuid": "4c4eb643-1c5f-41fb-bf62-f974546ee633", 00:11:29.595 "is_configured": true, 00:11:29.595 "data_offset": 0, 00:11:29.595 "data_size": 65536 00:11:29.595 }, 00:11:29.595 { 00:11:29.595 "name": "BaseBdev2", 00:11:29.595 "uuid": "0abc5135-2c06-4eef-b508-3433cc0c0fe0", 00:11:29.595 "is_configured": true, 00:11:29.595 "data_offset": 0, 00:11:29.595 "data_size": 65536 00:11:29.595 }, 00:11:29.595 { 00:11:29.595 "name": "BaseBdev3", 00:11:29.595 "uuid": "cbf27074-97cf-47ec-9301-b43e2c4513c0", 00:11:29.595 "is_configured": true, 00:11:29.595 "data_offset": 0, 00:11:29.595 "data_size": 65536 00:11:29.595 }, 00:11:29.595 { 00:11:29.595 "name": "BaseBdev4", 00:11:29.595 "uuid": "2b0aeea3-ec38-43cc-958c-859a59feeed9", 00:11:29.595 "is_configured": true, 00:11:29.595 "data_offset": 0, 00:11:29.595 "data_size": 65536 00:11:29.595 } 00:11:29.595 ] 00:11:29.595 } 00:11:29.595 } 00:11:29.595 }' 00:11:29.595 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:29.595 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:29.595 BaseBdev2 00:11:29.595 BaseBdev3 
00:11:29.595 BaseBdev4' 00:11:29.595 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.595 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:29.595 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.854 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:29.854 15:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.854 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.854 15:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.854 15:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.854 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.854 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.854 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.854 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:29.854 15:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.854 15:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.854 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.854 15:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.854 15:19:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.854 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.854 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.854 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.854 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:29.854 15:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.854 15:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.854 15:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.854 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.854 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.854 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.854 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.854 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:29.854 15:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.854 15:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.854 15:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.854 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.854 15:19:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.854 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:29.854 15:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.854 15:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.854 [2024-11-20 15:19:16.253473] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:30.113 15:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.113 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:30.113 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:30.113 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:30.113 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:30.113 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:30.113 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:30.113 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.114 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.114 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.114 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.114 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:30.114 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.114 
15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.114 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.114 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.114 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.114 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.114 15:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.114 15:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.114 15:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.114 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.114 "name": "Existed_Raid", 00:11:30.114 "uuid": "57156ffd-5d02-4396-85d3-5abbb26b54d7", 00:11:30.114 "strip_size_kb": 0, 00:11:30.114 "state": "online", 00:11:30.114 "raid_level": "raid1", 00:11:30.114 "superblock": false, 00:11:30.114 "num_base_bdevs": 4, 00:11:30.114 "num_base_bdevs_discovered": 3, 00:11:30.114 "num_base_bdevs_operational": 3, 00:11:30.114 "base_bdevs_list": [ 00:11:30.114 { 00:11:30.114 "name": null, 00:11:30.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.114 "is_configured": false, 00:11:30.114 "data_offset": 0, 00:11:30.114 "data_size": 65536 00:11:30.114 }, 00:11:30.114 { 00:11:30.114 "name": "BaseBdev2", 00:11:30.114 "uuid": "0abc5135-2c06-4eef-b508-3433cc0c0fe0", 00:11:30.114 "is_configured": true, 00:11:30.114 "data_offset": 0, 00:11:30.114 "data_size": 65536 00:11:30.114 }, 00:11:30.114 { 00:11:30.114 "name": "BaseBdev3", 00:11:30.114 "uuid": "cbf27074-97cf-47ec-9301-b43e2c4513c0", 00:11:30.114 "is_configured": true, 00:11:30.114 "data_offset": 0, 
00:11:30.114 "data_size": 65536 00:11:30.114 }, 00:11:30.114 { 00:11:30.114 "name": "BaseBdev4", 00:11:30.114 "uuid": "2b0aeea3-ec38-43cc-958c-859a59feeed9", 00:11:30.114 "is_configured": true, 00:11:30.114 "data_offset": 0, 00:11:30.114 "data_size": 65536 00:11:30.114 } 00:11:30.114 ] 00:11:30.114 }' 00:11:30.114 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.114 15:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.373 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:30.373 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:30.373 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:30.373 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.373 15:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.373 15:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.373 15:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.373 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:30.373 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:30.373 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:30.373 15:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.373 15:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.373 [2024-11-20 15:19:16.809514] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:30.632 15:19:16 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.632 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:30.632 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:30.632 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.632 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:30.632 15:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.632 15:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.632 15:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.632 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:30.632 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:30.632 15:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:30.632 15:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.632 15:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.632 [2024-11-20 15:19:16.955417] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:30.632 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.632 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:30.632 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:30.632 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.632 15:19:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.632 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.632 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:30.632 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.632 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:30.632 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:30.632 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:30.632 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.632 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.632 [2024-11-20 15:19:17.106957] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:30.632 [2024-11-20 15:19:17.107055] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:30.891 [2024-11-20 15:19:17.203400] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:30.891 [2024-11-20 15:19:17.203459] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:30.891 [2024-11-20 15:19:17.203474] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:30.891 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.891 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:30.891 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:30.891 15:19:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.891 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.891 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:30.891 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.891 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.891 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:30.891 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:30.891 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:30.891 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:30.891 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:30.891 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:30.891 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.891 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.891 BaseBdev2 00:11:30.891 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.891 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:30.891 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:30.891 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:30.891 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:30.891 15:19:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:30.891 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:30.891 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:30.891 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.891 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.891 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.891 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:30.891 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.891 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.891 [ 00:11:30.891 { 00:11:30.891 "name": "BaseBdev2", 00:11:30.891 "aliases": [ 00:11:30.891 "9822ad87-f5d6-43da-b058-f15d4b86b645" 00:11:30.891 ], 00:11:30.891 "product_name": "Malloc disk", 00:11:30.891 "block_size": 512, 00:11:30.891 "num_blocks": 65536, 00:11:30.891 "uuid": "9822ad87-f5d6-43da-b058-f15d4b86b645", 00:11:30.891 "assigned_rate_limits": { 00:11:30.891 "rw_ios_per_sec": 0, 00:11:30.891 "rw_mbytes_per_sec": 0, 00:11:30.891 "r_mbytes_per_sec": 0, 00:11:30.891 "w_mbytes_per_sec": 0 00:11:30.891 }, 00:11:30.891 "claimed": false, 00:11:30.891 "zoned": false, 00:11:30.891 "supported_io_types": { 00:11:30.891 "read": true, 00:11:30.891 "write": true, 00:11:30.891 "unmap": true, 00:11:30.891 "flush": true, 00:11:30.891 "reset": true, 00:11:30.891 "nvme_admin": false, 00:11:30.891 "nvme_io": false, 00:11:30.891 "nvme_io_md": false, 00:11:30.891 "write_zeroes": true, 00:11:30.891 "zcopy": true, 00:11:30.891 "get_zone_info": false, 00:11:30.891 "zone_management": false, 00:11:30.891 "zone_append": false, 
00:11:30.891 "compare": false, 00:11:30.891 "compare_and_write": false, 00:11:30.891 "abort": true, 00:11:30.891 "seek_hole": false, 00:11:30.891 "seek_data": false, 00:11:30.891 "copy": true, 00:11:30.891 "nvme_iov_md": false 00:11:30.891 }, 00:11:30.891 "memory_domains": [ 00:11:30.891 { 00:11:30.891 "dma_device_id": "system", 00:11:30.891 "dma_device_type": 1 00:11:30.891 }, 00:11:30.891 { 00:11:30.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.891 "dma_device_type": 2 00:11:30.891 } 00:11:30.891 ], 00:11:30.891 "driver_specific": {} 00:11:30.891 } 00:11:30.892 ] 00:11:30.892 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.892 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:30.892 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:30.892 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:30.892 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:30.892 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.892 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.151 BaseBdev3 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.151 [ 00:11:31.151 { 00:11:31.151 "name": "BaseBdev3", 00:11:31.151 "aliases": [ 00:11:31.151 "d5e88233-0c84-4498-b049-26d7c1aba717" 00:11:31.151 ], 00:11:31.151 "product_name": "Malloc disk", 00:11:31.151 "block_size": 512, 00:11:31.151 "num_blocks": 65536, 00:11:31.151 "uuid": "d5e88233-0c84-4498-b049-26d7c1aba717", 00:11:31.151 "assigned_rate_limits": { 00:11:31.151 "rw_ios_per_sec": 0, 00:11:31.151 "rw_mbytes_per_sec": 0, 00:11:31.151 "r_mbytes_per_sec": 0, 00:11:31.151 "w_mbytes_per_sec": 0 00:11:31.151 }, 00:11:31.151 "claimed": false, 00:11:31.151 "zoned": false, 00:11:31.151 "supported_io_types": { 00:11:31.151 "read": true, 00:11:31.151 "write": true, 00:11:31.151 "unmap": true, 00:11:31.151 "flush": true, 00:11:31.151 "reset": true, 00:11:31.151 "nvme_admin": false, 00:11:31.151 "nvme_io": false, 00:11:31.151 "nvme_io_md": false, 00:11:31.151 "write_zeroes": true, 00:11:31.151 "zcopy": true, 00:11:31.151 "get_zone_info": false, 00:11:31.151 "zone_management": false, 00:11:31.151 "zone_append": false, 
00:11:31.151 "compare": false, 00:11:31.151 "compare_and_write": false, 00:11:31.151 "abort": true, 00:11:31.151 "seek_hole": false, 00:11:31.151 "seek_data": false, 00:11:31.151 "copy": true, 00:11:31.151 "nvme_iov_md": false 00:11:31.151 }, 00:11:31.151 "memory_domains": [ 00:11:31.151 { 00:11:31.151 "dma_device_id": "system", 00:11:31.151 "dma_device_type": 1 00:11:31.151 }, 00:11:31.151 { 00:11:31.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.151 "dma_device_type": 2 00:11:31.151 } 00:11:31.151 ], 00:11:31.151 "driver_specific": {} 00:11:31.151 } 00:11:31.151 ] 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.151 BaseBdev4 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.151 [ 00:11:31.151 { 00:11:31.151 "name": "BaseBdev4", 00:11:31.151 "aliases": [ 00:11:31.151 "485226fe-6799-40e3-bd38-87dafa42a655" 00:11:31.151 ], 00:11:31.151 "product_name": "Malloc disk", 00:11:31.151 "block_size": 512, 00:11:31.151 "num_blocks": 65536, 00:11:31.151 "uuid": "485226fe-6799-40e3-bd38-87dafa42a655", 00:11:31.151 "assigned_rate_limits": { 00:11:31.151 "rw_ios_per_sec": 0, 00:11:31.151 "rw_mbytes_per_sec": 0, 00:11:31.151 "r_mbytes_per_sec": 0, 00:11:31.151 "w_mbytes_per_sec": 0 00:11:31.151 }, 00:11:31.151 "claimed": false, 00:11:31.151 "zoned": false, 00:11:31.151 "supported_io_types": { 00:11:31.151 "read": true, 00:11:31.151 "write": true, 00:11:31.151 "unmap": true, 00:11:31.151 "flush": true, 00:11:31.151 "reset": true, 00:11:31.151 "nvme_admin": false, 00:11:31.151 "nvme_io": false, 00:11:31.151 "nvme_io_md": false, 00:11:31.151 "write_zeroes": true, 00:11:31.151 "zcopy": true, 00:11:31.151 "get_zone_info": false, 00:11:31.151 "zone_management": false, 00:11:31.151 "zone_append": false, 
00:11:31.151 "compare": false, 00:11:31.151 "compare_and_write": false, 00:11:31.151 "abort": true, 00:11:31.151 "seek_hole": false, 00:11:31.151 "seek_data": false, 00:11:31.151 "copy": true, 00:11:31.151 "nvme_iov_md": false 00:11:31.151 }, 00:11:31.151 "memory_domains": [ 00:11:31.151 { 00:11:31.151 "dma_device_id": "system", 00:11:31.151 "dma_device_type": 1 00:11:31.151 }, 00:11:31.151 { 00:11:31.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.151 "dma_device_type": 2 00:11:31.151 } 00:11:31.151 ], 00:11:31.151 "driver_specific": {} 00:11:31.151 } 00:11:31.151 ] 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:31.151 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:31.152 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:31.152 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:31.152 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.152 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.152 [2024-11-20 15:19:17.532244] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:31.152 [2024-11-20 15:19:17.532308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:31.152 [2024-11-20 15:19:17.532335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:31.152 [2024-11-20 15:19:17.534781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:31.152 [2024-11-20 15:19:17.534851] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:31.152 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.152 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:31.152 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.152 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.152 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.152 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.152 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.152 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.152 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.152 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.152 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.152 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.152 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.152 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.152 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.152 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.152 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:31.152 "name": "Existed_Raid", 00:11:31.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.152 "strip_size_kb": 0, 00:11:31.152 "state": "configuring", 00:11:31.152 "raid_level": "raid1", 00:11:31.152 "superblock": false, 00:11:31.152 "num_base_bdevs": 4, 00:11:31.152 "num_base_bdevs_discovered": 3, 00:11:31.152 "num_base_bdevs_operational": 4, 00:11:31.152 "base_bdevs_list": [ 00:11:31.152 { 00:11:31.152 "name": "BaseBdev1", 00:11:31.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.152 "is_configured": false, 00:11:31.152 "data_offset": 0, 00:11:31.152 "data_size": 0 00:11:31.152 }, 00:11:31.152 { 00:11:31.152 "name": "BaseBdev2", 00:11:31.152 "uuid": "9822ad87-f5d6-43da-b058-f15d4b86b645", 00:11:31.152 "is_configured": true, 00:11:31.152 "data_offset": 0, 00:11:31.152 "data_size": 65536 00:11:31.152 }, 00:11:31.152 { 00:11:31.152 "name": "BaseBdev3", 00:11:31.152 "uuid": "d5e88233-0c84-4498-b049-26d7c1aba717", 00:11:31.152 "is_configured": true, 00:11:31.152 "data_offset": 0, 00:11:31.152 "data_size": 65536 00:11:31.152 }, 00:11:31.152 { 00:11:31.152 "name": "BaseBdev4", 00:11:31.152 "uuid": "485226fe-6799-40e3-bd38-87dafa42a655", 00:11:31.152 "is_configured": true, 00:11:31.152 "data_offset": 0, 00:11:31.152 "data_size": 65536 00:11:31.152 } 00:11:31.152 ] 00:11:31.152 }' 00:11:31.152 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.152 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.753 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:31.753 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.753 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.753 [2024-11-20 15:19:17.931826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:11:31.753 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.753 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:31.753 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.753 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.753 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.753 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.753 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.753 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.753 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.753 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.753 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.753 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.753 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.753 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.753 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.753 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.753 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.753 "name": "Existed_Raid", 00:11:31.753 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:31.753 "strip_size_kb": 0, 00:11:31.753 "state": "configuring", 00:11:31.753 "raid_level": "raid1", 00:11:31.753 "superblock": false, 00:11:31.753 "num_base_bdevs": 4, 00:11:31.753 "num_base_bdevs_discovered": 2, 00:11:31.753 "num_base_bdevs_operational": 4, 00:11:31.753 "base_bdevs_list": [ 00:11:31.753 { 00:11:31.753 "name": "BaseBdev1", 00:11:31.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.753 "is_configured": false, 00:11:31.753 "data_offset": 0, 00:11:31.753 "data_size": 0 00:11:31.753 }, 00:11:31.753 { 00:11:31.753 "name": null, 00:11:31.753 "uuid": "9822ad87-f5d6-43da-b058-f15d4b86b645", 00:11:31.753 "is_configured": false, 00:11:31.753 "data_offset": 0, 00:11:31.753 "data_size": 65536 00:11:31.753 }, 00:11:31.753 { 00:11:31.753 "name": "BaseBdev3", 00:11:31.753 "uuid": "d5e88233-0c84-4498-b049-26d7c1aba717", 00:11:31.753 "is_configured": true, 00:11:31.753 "data_offset": 0, 00:11:31.753 "data_size": 65536 00:11:31.753 }, 00:11:31.753 { 00:11:31.753 "name": "BaseBdev4", 00:11:31.753 "uuid": "485226fe-6799-40e3-bd38-87dafa42a655", 00:11:31.753 "is_configured": true, 00:11:31.753 "data_offset": 0, 00:11:31.753 "data_size": 65536 00:11:31.753 } 00:11:31.753 ] 00:11:31.753 }' 00:11:31.753 15:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.754 15:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.013 [2024-11-20 15:19:18.445038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:32.013 BaseBdev1 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.013 [ 00:11:32.013 { 00:11:32.013 "name": "BaseBdev1", 00:11:32.013 "aliases": [ 00:11:32.013 "8907f8e1-b3c4-414c-a5f6-a8c8e2acc61b" 00:11:32.013 ], 00:11:32.013 "product_name": "Malloc disk", 00:11:32.013 "block_size": 512, 00:11:32.013 "num_blocks": 65536, 00:11:32.013 "uuid": "8907f8e1-b3c4-414c-a5f6-a8c8e2acc61b", 00:11:32.013 "assigned_rate_limits": { 00:11:32.013 "rw_ios_per_sec": 0, 00:11:32.013 "rw_mbytes_per_sec": 0, 00:11:32.013 "r_mbytes_per_sec": 0, 00:11:32.013 "w_mbytes_per_sec": 0 00:11:32.013 }, 00:11:32.013 "claimed": true, 00:11:32.013 "claim_type": "exclusive_write", 00:11:32.013 "zoned": false, 00:11:32.013 "supported_io_types": { 00:11:32.013 "read": true, 00:11:32.013 "write": true, 00:11:32.013 "unmap": true, 00:11:32.013 "flush": true, 00:11:32.013 "reset": true, 00:11:32.013 "nvme_admin": false, 00:11:32.013 "nvme_io": false, 00:11:32.013 "nvme_io_md": false, 00:11:32.013 "write_zeroes": true, 00:11:32.013 "zcopy": true, 00:11:32.013 "get_zone_info": false, 00:11:32.013 "zone_management": false, 00:11:32.013 "zone_append": false, 00:11:32.013 "compare": false, 00:11:32.013 "compare_and_write": false, 00:11:32.013 "abort": true, 00:11:32.013 "seek_hole": false, 00:11:32.013 "seek_data": false, 00:11:32.013 "copy": true, 00:11:32.013 "nvme_iov_md": false 00:11:32.013 }, 00:11:32.013 "memory_domains": [ 00:11:32.013 { 00:11:32.013 "dma_device_id": "system", 00:11:32.013 "dma_device_type": 1 00:11:32.013 }, 00:11:32.013 { 00:11:32.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.013 "dma_device_type": 2 00:11:32.013 } 00:11:32.013 ], 00:11:32.013 "driver_specific": {} 00:11:32.013 } 00:11:32.013 ] 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.013 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.272 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.272 15:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.272 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.272 15:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.273 15:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.273 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.273 "name": "Existed_Raid", 00:11:32.273 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:32.273 "strip_size_kb": 0, 00:11:32.273 "state": "configuring", 00:11:32.273 "raid_level": "raid1", 00:11:32.273 "superblock": false, 00:11:32.273 "num_base_bdevs": 4, 00:11:32.273 "num_base_bdevs_discovered": 3, 00:11:32.273 "num_base_bdevs_operational": 4, 00:11:32.273 "base_bdevs_list": [ 00:11:32.273 { 00:11:32.273 "name": "BaseBdev1", 00:11:32.273 "uuid": "8907f8e1-b3c4-414c-a5f6-a8c8e2acc61b", 00:11:32.273 "is_configured": true, 00:11:32.273 "data_offset": 0, 00:11:32.273 "data_size": 65536 00:11:32.273 }, 00:11:32.273 { 00:11:32.273 "name": null, 00:11:32.273 "uuid": "9822ad87-f5d6-43da-b058-f15d4b86b645", 00:11:32.273 "is_configured": false, 00:11:32.273 "data_offset": 0, 00:11:32.273 "data_size": 65536 00:11:32.273 }, 00:11:32.273 { 00:11:32.273 "name": "BaseBdev3", 00:11:32.273 "uuid": "d5e88233-0c84-4498-b049-26d7c1aba717", 00:11:32.273 "is_configured": true, 00:11:32.273 "data_offset": 0, 00:11:32.273 "data_size": 65536 00:11:32.273 }, 00:11:32.273 { 00:11:32.273 "name": "BaseBdev4", 00:11:32.273 "uuid": "485226fe-6799-40e3-bd38-87dafa42a655", 00:11:32.273 "is_configured": true, 00:11:32.273 "data_offset": 0, 00:11:32.273 "data_size": 65536 00:11:32.273 } 00:11:32.273 ] 00:11:32.273 }' 00:11:32.273 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.273 15:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.532 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.532 15:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.532 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:32.532 15:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.532 15:19:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.532 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:32.532 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:32.532 15:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.532 15:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.532 [2024-11-20 15:19:18.944475] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:32.532 15:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.532 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:32.532 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.532 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.532 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.532 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.532 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.532 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.532 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.532 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.532 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.532 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:32.532 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.532 15:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.532 15:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.532 15:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.532 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.532 "name": "Existed_Raid", 00:11:32.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.532 "strip_size_kb": 0, 00:11:32.532 "state": "configuring", 00:11:32.532 "raid_level": "raid1", 00:11:32.532 "superblock": false, 00:11:32.532 "num_base_bdevs": 4, 00:11:32.532 "num_base_bdevs_discovered": 2, 00:11:32.532 "num_base_bdevs_operational": 4, 00:11:32.532 "base_bdevs_list": [ 00:11:32.532 { 00:11:32.532 "name": "BaseBdev1", 00:11:32.532 "uuid": "8907f8e1-b3c4-414c-a5f6-a8c8e2acc61b", 00:11:32.532 "is_configured": true, 00:11:32.532 "data_offset": 0, 00:11:32.532 "data_size": 65536 00:11:32.532 }, 00:11:32.532 { 00:11:32.532 "name": null, 00:11:32.532 "uuid": "9822ad87-f5d6-43da-b058-f15d4b86b645", 00:11:32.532 "is_configured": false, 00:11:32.532 "data_offset": 0, 00:11:32.532 "data_size": 65536 00:11:32.532 }, 00:11:32.532 { 00:11:32.532 "name": null, 00:11:32.532 "uuid": "d5e88233-0c84-4498-b049-26d7c1aba717", 00:11:32.532 "is_configured": false, 00:11:32.532 "data_offset": 0, 00:11:32.532 "data_size": 65536 00:11:32.532 }, 00:11:32.532 { 00:11:32.532 "name": "BaseBdev4", 00:11:32.532 "uuid": "485226fe-6799-40e3-bd38-87dafa42a655", 00:11:32.532 "is_configured": true, 00:11:32.532 "data_offset": 0, 00:11:32.532 "data_size": 65536 00:11:32.532 } 00:11:32.532 ] 00:11:32.532 }' 00:11:32.532 15:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.532 15:19:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.104 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.104 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:33.104 15:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.104 15:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.104 15:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.104 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:33.104 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:33.104 15:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.104 15:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.104 [2024-11-20 15:19:19.359844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:33.105 15:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.105 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:33.105 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.105 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.105 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.105 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.105 15:19:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.105 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.105 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.105 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.105 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.105 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.105 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.105 15:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.105 15:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.105 15:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.105 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.105 "name": "Existed_Raid", 00:11:33.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.105 "strip_size_kb": 0, 00:11:33.105 "state": "configuring", 00:11:33.105 "raid_level": "raid1", 00:11:33.105 "superblock": false, 00:11:33.105 "num_base_bdevs": 4, 00:11:33.105 "num_base_bdevs_discovered": 3, 00:11:33.105 "num_base_bdevs_operational": 4, 00:11:33.105 "base_bdevs_list": [ 00:11:33.105 { 00:11:33.105 "name": "BaseBdev1", 00:11:33.105 "uuid": "8907f8e1-b3c4-414c-a5f6-a8c8e2acc61b", 00:11:33.105 "is_configured": true, 00:11:33.105 "data_offset": 0, 00:11:33.105 "data_size": 65536 00:11:33.105 }, 00:11:33.105 { 00:11:33.105 "name": null, 00:11:33.105 "uuid": "9822ad87-f5d6-43da-b058-f15d4b86b645", 00:11:33.105 "is_configured": false, 00:11:33.105 "data_offset": 
0, 00:11:33.105 "data_size": 65536 00:11:33.105 }, 00:11:33.105 { 00:11:33.105 "name": "BaseBdev3", 00:11:33.105 "uuid": "d5e88233-0c84-4498-b049-26d7c1aba717", 00:11:33.105 "is_configured": true, 00:11:33.105 "data_offset": 0, 00:11:33.105 "data_size": 65536 00:11:33.105 }, 00:11:33.105 { 00:11:33.105 "name": "BaseBdev4", 00:11:33.105 "uuid": "485226fe-6799-40e3-bd38-87dafa42a655", 00:11:33.105 "is_configured": true, 00:11:33.105 "data_offset": 0, 00:11:33.105 "data_size": 65536 00:11:33.105 } 00:11:33.105 ] 00:11:33.105 }' 00:11:33.105 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.105 15:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.364 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:33.364 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.364 15:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.364 15:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.364 15:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.364 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:33.364 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:33.364 15:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.364 15:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.364 [2024-11-20 15:19:19.807422] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:33.624 15:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.624 15:19:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:33.624 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.624 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.624 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.624 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.624 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.624 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.624 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.624 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.624 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.624 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.624 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.624 15:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.624 15:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.624 15:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.624 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.624 "name": "Existed_Raid", 00:11:33.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.624 "strip_size_kb": 0, 00:11:33.624 "state": "configuring", 00:11:33.624 
"raid_level": "raid1", 00:11:33.624 "superblock": false, 00:11:33.624 "num_base_bdevs": 4, 00:11:33.624 "num_base_bdevs_discovered": 2, 00:11:33.624 "num_base_bdevs_operational": 4, 00:11:33.624 "base_bdevs_list": [ 00:11:33.624 { 00:11:33.624 "name": null, 00:11:33.624 "uuid": "8907f8e1-b3c4-414c-a5f6-a8c8e2acc61b", 00:11:33.624 "is_configured": false, 00:11:33.624 "data_offset": 0, 00:11:33.624 "data_size": 65536 00:11:33.624 }, 00:11:33.624 { 00:11:33.624 "name": null, 00:11:33.624 "uuid": "9822ad87-f5d6-43da-b058-f15d4b86b645", 00:11:33.624 "is_configured": false, 00:11:33.624 "data_offset": 0, 00:11:33.624 "data_size": 65536 00:11:33.624 }, 00:11:33.624 { 00:11:33.624 "name": "BaseBdev3", 00:11:33.624 "uuid": "d5e88233-0c84-4498-b049-26d7c1aba717", 00:11:33.624 "is_configured": true, 00:11:33.624 "data_offset": 0, 00:11:33.624 "data_size": 65536 00:11:33.624 }, 00:11:33.624 { 00:11:33.624 "name": "BaseBdev4", 00:11:33.624 "uuid": "485226fe-6799-40e3-bd38-87dafa42a655", 00:11:33.624 "is_configured": true, 00:11:33.624 "data_offset": 0, 00:11:33.624 "data_size": 65536 00:11:33.624 } 00:11:33.624 ] 00:11:33.624 }' 00:11:33.624 15:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.624 15:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.882 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:33.882 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.882 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.882 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.882 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.882 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:11:33.882 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:33.882 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.882 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.882 [2024-11-20 15:19:20.338928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:33.882 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.882 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:33.882 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.882 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.882 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.882 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.882 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.882 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.883 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.883 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.883 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.883 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.883 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:33.883 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.883 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.140 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.140 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.140 "name": "Existed_Raid", 00:11:34.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.141 "strip_size_kb": 0, 00:11:34.141 "state": "configuring", 00:11:34.141 "raid_level": "raid1", 00:11:34.141 "superblock": false, 00:11:34.141 "num_base_bdevs": 4, 00:11:34.141 "num_base_bdevs_discovered": 3, 00:11:34.141 "num_base_bdevs_operational": 4, 00:11:34.141 "base_bdevs_list": [ 00:11:34.141 { 00:11:34.141 "name": null, 00:11:34.141 "uuid": "8907f8e1-b3c4-414c-a5f6-a8c8e2acc61b", 00:11:34.141 "is_configured": false, 00:11:34.141 "data_offset": 0, 00:11:34.141 "data_size": 65536 00:11:34.141 }, 00:11:34.141 { 00:11:34.141 "name": "BaseBdev2", 00:11:34.141 "uuid": "9822ad87-f5d6-43da-b058-f15d4b86b645", 00:11:34.141 "is_configured": true, 00:11:34.141 "data_offset": 0, 00:11:34.141 "data_size": 65536 00:11:34.141 }, 00:11:34.141 { 00:11:34.141 "name": "BaseBdev3", 00:11:34.141 "uuid": "d5e88233-0c84-4498-b049-26d7c1aba717", 00:11:34.141 "is_configured": true, 00:11:34.141 "data_offset": 0, 00:11:34.141 "data_size": 65536 00:11:34.141 }, 00:11:34.141 { 00:11:34.141 "name": "BaseBdev4", 00:11:34.141 "uuid": "485226fe-6799-40e3-bd38-87dafa42a655", 00:11:34.141 "is_configured": true, 00:11:34.141 "data_offset": 0, 00:11:34.141 "data_size": 65536 00:11:34.141 } 00:11:34.141 ] 00:11:34.141 }' 00:11:34.141 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.141 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.400 15:19:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.400 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:34.400 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.400 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.400 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.400 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:34.400 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:34.400 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.400 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.400 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.400 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.400 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8907f8e1-b3c4-414c-a5f6-a8c8e2acc61b 00:11:34.400 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.400 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.400 [2024-11-20 15:19:20.865741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:34.400 [2024-11-20 15:19:20.865790] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:34.400 [2024-11-20 15:19:20.865802] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:34.400 
[2024-11-20 15:19:20.866072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:34.400 [2024-11-20 15:19:20.866222] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:34.400 [2024-11-20 15:19:20.866232] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:34.400 [2024-11-20 15:19:20.866473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.400 NewBaseBdev 00:11:34.400 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.400 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:34.400 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:34.400 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:34.400 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:34.400 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:34.400 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:34.400 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:34.400 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.400 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.400 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.400 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:34.400 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:34.400 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.660 [ 00:11:34.660 { 00:11:34.660 "name": "NewBaseBdev", 00:11:34.660 "aliases": [ 00:11:34.660 "8907f8e1-b3c4-414c-a5f6-a8c8e2acc61b" 00:11:34.660 ], 00:11:34.660 "product_name": "Malloc disk", 00:11:34.660 "block_size": 512, 00:11:34.660 "num_blocks": 65536, 00:11:34.660 "uuid": "8907f8e1-b3c4-414c-a5f6-a8c8e2acc61b", 00:11:34.660 "assigned_rate_limits": { 00:11:34.660 "rw_ios_per_sec": 0, 00:11:34.660 "rw_mbytes_per_sec": 0, 00:11:34.660 "r_mbytes_per_sec": 0, 00:11:34.660 "w_mbytes_per_sec": 0 00:11:34.660 }, 00:11:34.660 "claimed": true, 00:11:34.660 "claim_type": "exclusive_write", 00:11:34.660 "zoned": false, 00:11:34.660 "supported_io_types": { 00:11:34.660 "read": true, 00:11:34.660 "write": true, 00:11:34.660 "unmap": true, 00:11:34.660 "flush": true, 00:11:34.660 "reset": true, 00:11:34.660 "nvme_admin": false, 00:11:34.660 "nvme_io": false, 00:11:34.660 "nvme_io_md": false, 00:11:34.660 "write_zeroes": true, 00:11:34.660 "zcopy": true, 00:11:34.660 "get_zone_info": false, 00:11:34.660 "zone_management": false, 00:11:34.660 "zone_append": false, 00:11:34.660 "compare": false, 00:11:34.660 "compare_and_write": false, 00:11:34.660 "abort": true, 00:11:34.660 "seek_hole": false, 00:11:34.660 "seek_data": false, 00:11:34.660 "copy": true, 00:11:34.660 "nvme_iov_md": false 00:11:34.660 }, 00:11:34.660 "memory_domains": [ 00:11:34.660 { 00:11:34.660 "dma_device_id": "system", 00:11:34.660 "dma_device_type": 1 00:11:34.660 }, 00:11:34.660 { 00:11:34.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.660 "dma_device_type": 2 00:11:34.660 } 00:11:34.660 ], 00:11:34.660 "driver_specific": {} 00:11:34.660 } 00:11:34.660 ] 00:11:34.660 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.660 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:11:34.660 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:34.660 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.660 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.660 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.660 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.660 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.660 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.660 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.660 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.660 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.660 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.660 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.660 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.660 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.660 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.660 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.660 "name": "Existed_Raid", 00:11:34.660 "uuid": "f868f5b5-d512-4210-88cc-ff0953983dbe", 00:11:34.660 "strip_size_kb": 0, 00:11:34.660 "state": "online", 00:11:34.660 
"raid_level": "raid1", 00:11:34.660 "superblock": false, 00:11:34.660 "num_base_bdevs": 4, 00:11:34.660 "num_base_bdevs_discovered": 4, 00:11:34.660 "num_base_bdevs_operational": 4, 00:11:34.660 "base_bdevs_list": [ 00:11:34.660 { 00:11:34.660 "name": "NewBaseBdev", 00:11:34.660 "uuid": "8907f8e1-b3c4-414c-a5f6-a8c8e2acc61b", 00:11:34.660 "is_configured": true, 00:11:34.660 "data_offset": 0, 00:11:34.660 "data_size": 65536 00:11:34.660 }, 00:11:34.660 { 00:11:34.660 "name": "BaseBdev2", 00:11:34.660 "uuid": "9822ad87-f5d6-43da-b058-f15d4b86b645", 00:11:34.660 "is_configured": true, 00:11:34.660 "data_offset": 0, 00:11:34.660 "data_size": 65536 00:11:34.660 }, 00:11:34.660 { 00:11:34.660 "name": "BaseBdev3", 00:11:34.660 "uuid": "d5e88233-0c84-4498-b049-26d7c1aba717", 00:11:34.660 "is_configured": true, 00:11:34.660 "data_offset": 0, 00:11:34.660 "data_size": 65536 00:11:34.660 }, 00:11:34.660 { 00:11:34.660 "name": "BaseBdev4", 00:11:34.660 "uuid": "485226fe-6799-40e3-bd38-87dafa42a655", 00:11:34.660 "is_configured": true, 00:11:34.660 "data_offset": 0, 00:11:34.660 "data_size": 65536 00:11:34.660 } 00:11:34.660 ] 00:11:34.660 }' 00:11:34.661 15:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.661 15:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.920 15:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:34.920 15:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:34.920 15:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:34.920 15:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:34.920 15:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:34.920 15:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:11:34.920 15:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:34.920 15:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:34.920 15:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.920 15:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.920 [2024-11-20 15:19:21.321441] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.920 15:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.920 15:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:34.920 "name": "Existed_Raid", 00:11:34.920 "aliases": [ 00:11:34.920 "f868f5b5-d512-4210-88cc-ff0953983dbe" 00:11:34.920 ], 00:11:34.920 "product_name": "Raid Volume", 00:11:34.920 "block_size": 512, 00:11:34.920 "num_blocks": 65536, 00:11:34.920 "uuid": "f868f5b5-d512-4210-88cc-ff0953983dbe", 00:11:34.920 "assigned_rate_limits": { 00:11:34.920 "rw_ios_per_sec": 0, 00:11:34.920 "rw_mbytes_per_sec": 0, 00:11:34.920 "r_mbytes_per_sec": 0, 00:11:34.920 "w_mbytes_per_sec": 0 00:11:34.920 }, 00:11:34.920 "claimed": false, 00:11:34.920 "zoned": false, 00:11:34.920 "supported_io_types": { 00:11:34.920 "read": true, 00:11:34.920 "write": true, 00:11:34.920 "unmap": false, 00:11:34.920 "flush": false, 00:11:34.920 "reset": true, 00:11:34.920 "nvme_admin": false, 00:11:34.920 "nvme_io": false, 00:11:34.920 "nvme_io_md": false, 00:11:34.920 "write_zeroes": true, 00:11:34.920 "zcopy": false, 00:11:34.920 "get_zone_info": false, 00:11:34.920 "zone_management": false, 00:11:34.920 "zone_append": false, 00:11:34.920 "compare": false, 00:11:34.920 "compare_and_write": false, 00:11:34.920 "abort": false, 00:11:34.920 "seek_hole": false, 00:11:34.920 "seek_data": false, 00:11:34.920 
"copy": false, 00:11:34.920 "nvme_iov_md": false 00:11:34.920 }, 00:11:34.920 "memory_domains": [ 00:11:34.920 { 00:11:34.920 "dma_device_id": "system", 00:11:34.920 "dma_device_type": 1 00:11:34.920 }, 00:11:34.920 { 00:11:34.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.920 "dma_device_type": 2 00:11:34.920 }, 00:11:34.920 { 00:11:34.920 "dma_device_id": "system", 00:11:34.920 "dma_device_type": 1 00:11:34.920 }, 00:11:34.920 { 00:11:34.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.920 "dma_device_type": 2 00:11:34.920 }, 00:11:34.920 { 00:11:34.920 "dma_device_id": "system", 00:11:34.920 "dma_device_type": 1 00:11:34.920 }, 00:11:34.920 { 00:11:34.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.920 "dma_device_type": 2 00:11:34.920 }, 00:11:34.920 { 00:11:34.920 "dma_device_id": "system", 00:11:34.920 "dma_device_type": 1 00:11:34.920 }, 00:11:34.920 { 00:11:34.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.920 "dma_device_type": 2 00:11:34.920 } 00:11:34.920 ], 00:11:34.920 "driver_specific": { 00:11:34.920 "raid": { 00:11:34.920 "uuid": "f868f5b5-d512-4210-88cc-ff0953983dbe", 00:11:34.920 "strip_size_kb": 0, 00:11:34.920 "state": "online", 00:11:34.920 "raid_level": "raid1", 00:11:34.920 "superblock": false, 00:11:34.920 "num_base_bdevs": 4, 00:11:34.920 "num_base_bdevs_discovered": 4, 00:11:34.920 "num_base_bdevs_operational": 4, 00:11:34.920 "base_bdevs_list": [ 00:11:34.920 { 00:11:34.920 "name": "NewBaseBdev", 00:11:34.920 "uuid": "8907f8e1-b3c4-414c-a5f6-a8c8e2acc61b", 00:11:34.920 "is_configured": true, 00:11:34.920 "data_offset": 0, 00:11:34.920 "data_size": 65536 00:11:34.920 }, 00:11:34.920 { 00:11:34.920 "name": "BaseBdev2", 00:11:34.920 "uuid": "9822ad87-f5d6-43da-b058-f15d4b86b645", 00:11:34.920 "is_configured": true, 00:11:34.920 "data_offset": 0, 00:11:34.920 "data_size": 65536 00:11:34.920 }, 00:11:34.920 { 00:11:34.920 "name": "BaseBdev3", 00:11:34.920 "uuid": "d5e88233-0c84-4498-b049-26d7c1aba717", 00:11:34.920 
"is_configured": true, 00:11:34.920 "data_offset": 0, 00:11:34.920 "data_size": 65536 00:11:34.920 }, 00:11:34.920 { 00:11:34.920 "name": "BaseBdev4", 00:11:34.920 "uuid": "485226fe-6799-40e3-bd38-87dafa42a655", 00:11:34.920 "is_configured": true, 00:11:34.920 "data_offset": 0, 00:11:34.920 "data_size": 65536 00:11:34.920 } 00:11:34.920 ] 00:11:34.920 } 00:11:34.920 } 00:11:34.920 }' 00:11:34.920 15:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:34.920 15:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:34.920 BaseBdev2 00:11:34.920 BaseBdev3 00:11:34.920 BaseBdev4' 00:11:34.920 15:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.180 15:19:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.180 15:19:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.180 [2024-11-20 15:19:21.640791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:35.180 [2024-11-20 15:19:21.640823] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:35.180 [2024-11-20 15:19:21.640911] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:35.180 [2024-11-20 15:19:21.641212] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:35.180 [2024-11-20 15:19:21.641237] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73005 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73005 ']' 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73005 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:35.180 15:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73005 00:11:35.439 killing process with pid 73005 00:11:35.439 15:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:35.439 15:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:35.439 15:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73005' 00:11:35.439 15:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73005 00:11:35.439 [2024-11-20 15:19:21.696333] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:35.439 15:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73005 00:11:35.697 [2024-11-20 15:19:22.095594] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:37.074 00:11:37.074 real 0m11.238s 00:11:37.074 user 0m17.799s 00:11:37.074 sys 0m2.378s 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.074 ************************************ 00:11:37.074 END TEST raid_state_function_test 00:11:37.074 ************************************ 
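The test above repeatedly verifies raid bdev state by piping `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` and checking fields of the resulting JSON. As an illustrative sketch only (not the actual `bdev_raid.sh` helper), the checks implied by the log can be approximated in Python against an abridged copy of the `Existed_Raid` info printed later in this log; the helper name and exact assertions here are inferred, not taken from SPDK sources:

```python
import json

# Abridged bdev_raid_get_bdevs output, copied from the raid_bdev_info
# dump that appears further down in this log (fields trimmed).
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 0,
  "state": "configuring",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": false},
    {"name": "BaseBdev2", "is_configured": false},
    {"name": "BaseBdev3", "is_configured": false},
    {"name": "BaseBdev4", "is_configured": false}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, num_operational):
    """Approximation of the state checks verify_raid_bdev_state performs
    in bdev_raid.sh, inferred from the log output (not the real helper)."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["num_base_bdevs_operational"] == num_operational
    # Discovered count must match the bdevs actually marked configured.
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert discovered == info["num_base_bdevs_discovered"]
    return True

print(verify_raid_bdev_state(raid_bdev_info, "configuring", "raid1", 4))  # True
```

In the log itself these checks run in bash against live RPC output; the sketch only mirrors the field-level comparisons visible in the trace.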
00:11:37.074 15:19:23 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:37.074 15:19:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:37.074 15:19:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:37.074 15:19:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:37.074 ************************************ 00:11:37.074 START TEST raid_state_function_test_sb 00:11:37.074 ************************************ 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:37.074 
15:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73671 00:11:37.074 Process raid pid: 73671 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73671' 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73671 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73671 ']' 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:37.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:37.074 15:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.074 [2024-11-20 15:19:23.419891] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:11:37.074 [2024-11-20 15:19:23.420070] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.334 [2024-11-20 15:19:23.602989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.334 [2024-11-20 15:19:23.727711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.592 [2024-11-20 15:19:23.950833] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:37.593 [2024-11-20 15:19:23.950875] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:37.852 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:37.852 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:37.852 15:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:37.852 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.852 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.852 [2024-11-20 15:19:24.273865] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:37.852 [2024-11-20 15:19:24.273925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:37.852 [2024-11-20 15:19:24.273936] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:37.852 [2024-11-20 15:19:24.273949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:37.852 [2024-11-20 15:19:24.273957] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:37.852 [2024-11-20 15:19:24.273969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:37.852 [2024-11-20 15:19:24.273977] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:37.852 [2024-11-20 15:19:24.273989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:37.852 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.852 15:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:37.852 15:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.852 15:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.852 15:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.852 15:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.852 15:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.852 15:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.852 15:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.852 15:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.852 15:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.852 15:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.852 15:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.852 15:19:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.852 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.852 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.852 15:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.852 "name": "Existed_Raid", 00:11:37.852 "uuid": "7f835a27-4474-4bdf-b39a-4b40552607d4", 00:11:37.852 "strip_size_kb": 0, 00:11:37.852 "state": "configuring", 00:11:37.852 "raid_level": "raid1", 00:11:37.852 "superblock": true, 00:11:37.852 "num_base_bdevs": 4, 00:11:37.852 "num_base_bdevs_discovered": 0, 00:11:37.852 "num_base_bdevs_operational": 4, 00:11:37.852 "base_bdevs_list": [ 00:11:37.852 { 00:11:37.852 "name": "BaseBdev1", 00:11:37.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.852 "is_configured": false, 00:11:37.852 "data_offset": 0, 00:11:37.852 "data_size": 0 00:11:37.852 }, 00:11:37.852 { 00:11:37.852 "name": "BaseBdev2", 00:11:37.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.852 "is_configured": false, 00:11:37.852 "data_offset": 0, 00:11:37.852 "data_size": 0 00:11:37.852 }, 00:11:37.852 { 00:11:37.852 "name": "BaseBdev3", 00:11:37.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.852 "is_configured": false, 00:11:37.852 "data_offset": 0, 00:11:37.852 "data_size": 0 00:11:37.852 }, 00:11:37.852 { 00:11:37.852 "name": "BaseBdev4", 00:11:37.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.852 "is_configured": false, 00:11:37.852 "data_offset": 0, 00:11:37.852 "data_size": 0 00:11:37.852 } 00:11:37.852 ] 00:11:37.852 }' 00:11:37.852 15:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.852 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.421 15:19:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:38.421 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.421 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.421 [2024-11-20 15:19:24.701830] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:38.421 [2024-11-20 15:19:24.701872] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:38.421 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.421 15:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:38.421 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.421 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.421 [2024-11-20 15:19:24.713813] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:38.421 [2024-11-20 15:19:24.713861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:38.421 [2024-11-20 15:19:24.713872] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:38.421 [2024-11-20 15:19:24.713885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:38.421 [2024-11-20 15:19:24.713893] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:38.421 [2024-11-20 15:19:24.713905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:38.421 [2024-11-20 15:19:24.713913] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:11:38.421 [2024-11-20 15:19:24.713924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:38.421 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.421 15:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:38.421 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.421 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.421 [2024-11-20 15:19:24.759880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:38.421 BaseBdev1 00:11:38.421 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.421 15:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:38.421 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:38.421 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:38.421 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:38.421 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:38.421 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:38.421 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:38.421 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.421 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.421 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:38.421 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:38.421 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.421 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.421 [ 00:11:38.421 { 00:11:38.421 "name": "BaseBdev1", 00:11:38.421 "aliases": [ 00:11:38.421 "54588fd0-0ce3-4ae0-9daa-7d57296d7493" 00:11:38.421 ], 00:11:38.421 "product_name": "Malloc disk", 00:11:38.422 "block_size": 512, 00:11:38.422 "num_blocks": 65536, 00:11:38.422 "uuid": "54588fd0-0ce3-4ae0-9daa-7d57296d7493", 00:11:38.422 "assigned_rate_limits": { 00:11:38.422 "rw_ios_per_sec": 0, 00:11:38.422 "rw_mbytes_per_sec": 0, 00:11:38.422 "r_mbytes_per_sec": 0, 00:11:38.422 "w_mbytes_per_sec": 0 00:11:38.422 }, 00:11:38.422 "claimed": true, 00:11:38.422 "claim_type": "exclusive_write", 00:11:38.422 "zoned": false, 00:11:38.422 "supported_io_types": { 00:11:38.422 "read": true, 00:11:38.422 "write": true, 00:11:38.422 "unmap": true, 00:11:38.422 "flush": true, 00:11:38.422 "reset": true, 00:11:38.422 "nvme_admin": false, 00:11:38.422 "nvme_io": false, 00:11:38.422 "nvme_io_md": false, 00:11:38.422 "write_zeroes": true, 00:11:38.422 "zcopy": true, 00:11:38.422 "get_zone_info": false, 00:11:38.422 "zone_management": false, 00:11:38.422 "zone_append": false, 00:11:38.422 "compare": false, 00:11:38.422 "compare_and_write": false, 00:11:38.422 "abort": true, 00:11:38.422 "seek_hole": false, 00:11:38.422 "seek_data": false, 00:11:38.422 "copy": true, 00:11:38.422 "nvme_iov_md": false 00:11:38.422 }, 00:11:38.422 "memory_domains": [ 00:11:38.422 { 00:11:38.422 "dma_device_id": "system", 00:11:38.422 "dma_device_type": 1 00:11:38.422 }, 00:11:38.422 { 00:11:38.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.422 "dma_device_type": 2 00:11:38.422 } 00:11:38.422 ], 00:11:38.422 "driver_specific": {} 
00:11:38.422 } 00:11:38.422 ] 00:11:38.422 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.422 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:38.422 15:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:38.422 15:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.422 15:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.422 15:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.422 15:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.422 15:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.422 15:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.422 15:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.422 15:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.422 15:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.422 15:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.422 15:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.422 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.422 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.422 15:19:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.422 15:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.422 "name": "Existed_Raid", 00:11:38.422 "uuid": "a142ac18-a5d7-44d0-8e06-2111e66b380a", 00:11:38.422 "strip_size_kb": 0, 00:11:38.422 "state": "configuring", 00:11:38.422 "raid_level": "raid1", 00:11:38.422 "superblock": true, 00:11:38.422 "num_base_bdevs": 4, 00:11:38.422 "num_base_bdevs_discovered": 1, 00:11:38.422 "num_base_bdevs_operational": 4, 00:11:38.422 "base_bdevs_list": [ 00:11:38.422 { 00:11:38.422 "name": "BaseBdev1", 00:11:38.422 "uuid": "54588fd0-0ce3-4ae0-9daa-7d57296d7493", 00:11:38.422 "is_configured": true, 00:11:38.422 "data_offset": 2048, 00:11:38.422 "data_size": 63488 00:11:38.422 }, 00:11:38.422 { 00:11:38.422 "name": "BaseBdev2", 00:11:38.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.422 "is_configured": false, 00:11:38.422 "data_offset": 0, 00:11:38.422 "data_size": 0 00:11:38.422 }, 00:11:38.422 { 00:11:38.422 "name": "BaseBdev3", 00:11:38.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.422 "is_configured": false, 00:11:38.422 "data_offset": 0, 00:11:38.422 "data_size": 0 00:11:38.422 }, 00:11:38.422 { 00:11:38.422 "name": "BaseBdev4", 00:11:38.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.422 "is_configured": false, 00:11:38.422 "data_offset": 0, 00:11:38.422 "data_size": 0 00:11:38.422 } 00:11:38.422 ] 00:11:38.422 }' 00:11:38.422 15:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.422 15:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.991 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:38.991 15:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.991 15:19:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:38.991 [2024-11-20 15:19:25.215324] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:38.991 [2024-11-20 15:19:25.215550] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:38.991 15:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.991 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:38.991 15:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.991 15:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.991 [2024-11-20 15:19:25.223373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:38.991 [2024-11-20 15:19:25.225581] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:38.991 [2024-11-20 15:19:25.225626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:38.991 [2024-11-20 15:19:25.225637] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:38.991 [2024-11-20 15:19:25.225651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:38.991 [2024-11-20 15:19:25.225674] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:38.991 [2024-11-20 15:19:25.225687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:38.991 15:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.991 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:38.991 15:19:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:38.991 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:38.991 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.991 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.991 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.991 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.991 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.991 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.991 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.991 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.991 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.991 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.991 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.991 15:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.991 15:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.991 15:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.991 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.991 "name": 
"Existed_Raid", 00:11:38.991 "uuid": "ffeb84bb-29fe-440c-a3f8-95c872ac3aff", 00:11:38.991 "strip_size_kb": 0, 00:11:38.991 "state": "configuring", 00:11:38.991 "raid_level": "raid1", 00:11:38.991 "superblock": true, 00:11:38.991 "num_base_bdevs": 4, 00:11:38.991 "num_base_bdevs_discovered": 1, 00:11:38.991 "num_base_bdevs_operational": 4, 00:11:38.991 "base_bdevs_list": [ 00:11:38.991 { 00:11:38.991 "name": "BaseBdev1", 00:11:38.991 "uuid": "54588fd0-0ce3-4ae0-9daa-7d57296d7493", 00:11:38.991 "is_configured": true, 00:11:38.991 "data_offset": 2048, 00:11:38.991 "data_size": 63488 00:11:38.991 }, 00:11:38.991 { 00:11:38.991 "name": "BaseBdev2", 00:11:38.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.991 "is_configured": false, 00:11:38.991 "data_offset": 0, 00:11:38.991 "data_size": 0 00:11:38.991 }, 00:11:38.991 { 00:11:38.991 "name": "BaseBdev3", 00:11:38.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.991 "is_configured": false, 00:11:38.991 "data_offset": 0, 00:11:38.991 "data_size": 0 00:11:38.991 }, 00:11:38.991 { 00:11:38.991 "name": "BaseBdev4", 00:11:38.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.991 "is_configured": false, 00:11:38.991 "data_offset": 0, 00:11:38.991 "data_size": 0 00:11:38.991 } 00:11:38.991 ] 00:11:38.991 }' 00:11:38.991 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.991 15:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.251 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:39.251 15:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.251 15:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.251 [2024-11-20 15:19:25.687813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:39.251 
BaseBdev2 00:11:39.251 15:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.251 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:39.251 15:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:39.251 15:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:39.251 15:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:39.251 15:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:39.251 15:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:39.251 15:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:39.251 15:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.251 15:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.251 15:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.251 15:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:39.251 15:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.251 15:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.251 [ 00:11:39.251 { 00:11:39.251 "name": "BaseBdev2", 00:11:39.251 "aliases": [ 00:11:39.251 "edf6fb41-28a0-4c8a-95b0-d378d498d928" 00:11:39.251 ], 00:11:39.251 "product_name": "Malloc disk", 00:11:39.251 "block_size": 512, 00:11:39.251 "num_blocks": 65536, 00:11:39.251 "uuid": "edf6fb41-28a0-4c8a-95b0-d378d498d928", 00:11:39.251 "assigned_rate_limits": { 
00:11:39.251 "rw_ios_per_sec": 0, 00:11:39.251 "rw_mbytes_per_sec": 0, 00:11:39.251 "r_mbytes_per_sec": 0, 00:11:39.251 "w_mbytes_per_sec": 0 00:11:39.251 }, 00:11:39.251 "claimed": true, 00:11:39.251 "claim_type": "exclusive_write", 00:11:39.251 "zoned": false, 00:11:39.251 "supported_io_types": { 00:11:39.251 "read": true, 00:11:39.251 "write": true, 00:11:39.251 "unmap": true, 00:11:39.251 "flush": true, 00:11:39.251 "reset": true, 00:11:39.251 "nvme_admin": false, 00:11:39.251 "nvme_io": false, 00:11:39.251 "nvme_io_md": false, 00:11:39.251 "write_zeroes": true, 00:11:39.251 "zcopy": true, 00:11:39.251 "get_zone_info": false, 00:11:39.251 "zone_management": false, 00:11:39.251 "zone_append": false, 00:11:39.251 "compare": false, 00:11:39.251 "compare_and_write": false, 00:11:39.251 "abort": true, 00:11:39.251 "seek_hole": false, 00:11:39.251 "seek_data": false, 00:11:39.251 "copy": true, 00:11:39.251 "nvme_iov_md": false 00:11:39.251 }, 00:11:39.251 "memory_domains": [ 00:11:39.251 { 00:11:39.510 "dma_device_id": "system", 00:11:39.510 "dma_device_type": 1 00:11:39.510 }, 00:11:39.510 { 00:11:39.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.510 "dma_device_type": 2 00:11:39.510 } 00:11:39.510 ], 00:11:39.510 "driver_specific": {} 00:11:39.510 } 00:11:39.510 ] 00:11:39.510 15:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.510 15:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:39.510 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:39.510 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:39.510 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:39.510 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:11:39.510 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.510 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.510 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.510 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.510 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.510 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.510 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.510 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.510 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.510 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.510 15:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.510 15:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.510 15:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.510 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.510 "name": "Existed_Raid", 00:11:39.510 "uuid": "ffeb84bb-29fe-440c-a3f8-95c872ac3aff", 00:11:39.511 "strip_size_kb": 0, 00:11:39.511 "state": "configuring", 00:11:39.511 "raid_level": "raid1", 00:11:39.511 "superblock": true, 00:11:39.511 "num_base_bdevs": 4, 00:11:39.511 "num_base_bdevs_discovered": 2, 00:11:39.511 "num_base_bdevs_operational": 4, 00:11:39.511 
"base_bdevs_list": [ 00:11:39.511 { 00:11:39.511 "name": "BaseBdev1", 00:11:39.511 "uuid": "54588fd0-0ce3-4ae0-9daa-7d57296d7493", 00:11:39.511 "is_configured": true, 00:11:39.511 "data_offset": 2048, 00:11:39.511 "data_size": 63488 00:11:39.511 }, 00:11:39.511 { 00:11:39.511 "name": "BaseBdev2", 00:11:39.511 "uuid": "edf6fb41-28a0-4c8a-95b0-d378d498d928", 00:11:39.511 "is_configured": true, 00:11:39.511 "data_offset": 2048, 00:11:39.511 "data_size": 63488 00:11:39.511 }, 00:11:39.511 { 00:11:39.511 "name": "BaseBdev3", 00:11:39.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.511 "is_configured": false, 00:11:39.511 "data_offset": 0, 00:11:39.511 "data_size": 0 00:11:39.511 }, 00:11:39.511 { 00:11:39.511 "name": "BaseBdev4", 00:11:39.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.511 "is_configured": false, 00:11:39.511 "data_offset": 0, 00:11:39.511 "data_size": 0 00:11:39.511 } 00:11:39.511 ] 00:11:39.511 }' 00:11:39.511 15:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.511 15:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.769 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:39.769 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.769 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.769 [2024-11-20 15:19:26.242553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:39.769 BaseBdev3 00:11:39.769 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.769 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:39.769 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:11:39.769 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:39.769 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:39.769 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:39.769 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:39.769 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:39.769 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.769 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.028 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.028 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:40.028 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.028 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.028 [ 00:11:40.028 { 00:11:40.028 "name": "BaseBdev3", 00:11:40.028 "aliases": [ 00:11:40.028 "f2b207c7-0d70-4c86-a8a5-8f2e74bb38cb" 00:11:40.028 ], 00:11:40.028 "product_name": "Malloc disk", 00:11:40.028 "block_size": 512, 00:11:40.028 "num_blocks": 65536, 00:11:40.028 "uuid": "f2b207c7-0d70-4c86-a8a5-8f2e74bb38cb", 00:11:40.028 "assigned_rate_limits": { 00:11:40.028 "rw_ios_per_sec": 0, 00:11:40.028 "rw_mbytes_per_sec": 0, 00:11:40.028 "r_mbytes_per_sec": 0, 00:11:40.028 "w_mbytes_per_sec": 0 00:11:40.028 }, 00:11:40.028 "claimed": true, 00:11:40.028 "claim_type": "exclusive_write", 00:11:40.028 "zoned": false, 00:11:40.028 "supported_io_types": { 00:11:40.028 "read": true, 00:11:40.028 
"write": true, 00:11:40.028 "unmap": true, 00:11:40.028 "flush": true, 00:11:40.028 "reset": true, 00:11:40.028 "nvme_admin": false, 00:11:40.028 "nvme_io": false, 00:11:40.028 "nvme_io_md": false, 00:11:40.028 "write_zeroes": true, 00:11:40.028 "zcopy": true, 00:11:40.028 "get_zone_info": false, 00:11:40.028 "zone_management": false, 00:11:40.028 "zone_append": false, 00:11:40.028 "compare": false, 00:11:40.028 "compare_and_write": false, 00:11:40.028 "abort": true, 00:11:40.028 "seek_hole": false, 00:11:40.028 "seek_data": false, 00:11:40.028 "copy": true, 00:11:40.028 "nvme_iov_md": false 00:11:40.028 }, 00:11:40.028 "memory_domains": [ 00:11:40.028 { 00:11:40.028 "dma_device_id": "system", 00:11:40.028 "dma_device_type": 1 00:11:40.028 }, 00:11:40.028 { 00:11:40.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.028 "dma_device_type": 2 00:11:40.028 } 00:11:40.028 ], 00:11:40.028 "driver_specific": {} 00:11:40.028 } 00:11:40.028 ] 00:11:40.028 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.028 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:40.028 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:40.028 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:40.028 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:40.028 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.028 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.028 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.028 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:40.028 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.028 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.028 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.028 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.028 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.028 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.028 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.028 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.028 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.028 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.028 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.028 "name": "Existed_Raid", 00:11:40.028 "uuid": "ffeb84bb-29fe-440c-a3f8-95c872ac3aff", 00:11:40.028 "strip_size_kb": 0, 00:11:40.028 "state": "configuring", 00:11:40.028 "raid_level": "raid1", 00:11:40.028 "superblock": true, 00:11:40.028 "num_base_bdevs": 4, 00:11:40.028 "num_base_bdevs_discovered": 3, 00:11:40.028 "num_base_bdevs_operational": 4, 00:11:40.028 "base_bdevs_list": [ 00:11:40.028 { 00:11:40.028 "name": "BaseBdev1", 00:11:40.028 "uuid": "54588fd0-0ce3-4ae0-9daa-7d57296d7493", 00:11:40.028 "is_configured": true, 00:11:40.028 "data_offset": 2048, 00:11:40.028 "data_size": 63488 00:11:40.028 }, 00:11:40.028 { 00:11:40.028 "name": "BaseBdev2", 00:11:40.028 "uuid": 
"edf6fb41-28a0-4c8a-95b0-d378d498d928", 00:11:40.028 "is_configured": true, 00:11:40.028 "data_offset": 2048, 00:11:40.028 "data_size": 63488 00:11:40.028 }, 00:11:40.028 { 00:11:40.028 "name": "BaseBdev3", 00:11:40.028 "uuid": "f2b207c7-0d70-4c86-a8a5-8f2e74bb38cb", 00:11:40.028 "is_configured": true, 00:11:40.028 "data_offset": 2048, 00:11:40.029 "data_size": 63488 00:11:40.029 }, 00:11:40.029 { 00:11:40.029 "name": "BaseBdev4", 00:11:40.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.029 "is_configured": false, 00:11:40.029 "data_offset": 0, 00:11:40.029 "data_size": 0 00:11:40.029 } 00:11:40.029 ] 00:11:40.029 }' 00:11:40.029 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.029 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.287 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:40.287 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.287 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.287 [2024-11-20 15:19:26.761506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:40.287 [2024-11-20 15:19:26.761810] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:40.287 [2024-11-20 15:19:26.761827] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:40.287 [2024-11-20 15:19:26.762114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:40.287 [2024-11-20 15:19:26.762261] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:40.287 [2024-11-20 15:19:26.762282] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:11:40.287 [2024-11-20 15:19:26.762421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:40.287 BaseBdev4 00:11:40.287 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.287 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:40.287 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:40.287 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:40.287 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:40.287 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:40.287 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:40.287 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:40.287 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.287 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.546 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.546 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:40.546 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.546 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.546 [ 00:11:40.546 { 00:11:40.546 "name": "BaseBdev4", 00:11:40.546 "aliases": [ 00:11:40.546 "293fc381-0657-4a5d-bc51-d17f8a2af336" 00:11:40.546 ], 00:11:40.546 "product_name": "Malloc disk", 00:11:40.546 "block_size": 512, 00:11:40.546 
"num_blocks": 65536, 00:11:40.546 "uuid": "293fc381-0657-4a5d-bc51-d17f8a2af336", 00:11:40.546 "assigned_rate_limits": { 00:11:40.546 "rw_ios_per_sec": 0, 00:11:40.546 "rw_mbytes_per_sec": 0, 00:11:40.546 "r_mbytes_per_sec": 0, 00:11:40.546 "w_mbytes_per_sec": 0 00:11:40.546 }, 00:11:40.546 "claimed": true, 00:11:40.546 "claim_type": "exclusive_write", 00:11:40.546 "zoned": false, 00:11:40.546 "supported_io_types": { 00:11:40.546 "read": true, 00:11:40.546 "write": true, 00:11:40.546 "unmap": true, 00:11:40.546 "flush": true, 00:11:40.546 "reset": true, 00:11:40.546 "nvme_admin": false, 00:11:40.546 "nvme_io": false, 00:11:40.546 "nvme_io_md": false, 00:11:40.546 "write_zeroes": true, 00:11:40.546 "zcopy": true, 00:11:40.546 "get_zone_info": false, 00:11:40.546 "zone_management": false, 00:11:40.546 "zone_append": false, 00:11:40.546 "compare": false, 00:11:40.546 "compare_and_write": false, 00:11:40.546 "abort": true, 00:11:40.546 "seek_hole": false, 00:11:40.546 "seek_data": false, 00:11:40.546 "copy": true, 00:11:40.546 "nvme_iov_md": false 00:11:40.546 }, 00:11:40.546 "memory_domains": [ 00:11:40.546 { 00:11:40.546 "dma_device_id": "system", 00:11:40.546 "dma_device_type": 1 00:11:40.546 }, 00:11:40.546 { 00:11:40.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.546 "dma_device_type": 2 00:11:40.546 } 00:11:40.546 ], 00:11:40.546 "driver_specific": {} 00:11:40.546 } 00:11:40.546 ] 00:11:40.546 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.546 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:40.546 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:40.546 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:40.546 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:40.546 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.546 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:40.546 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.546 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.546 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.546 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.546 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.546 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.546 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.546 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.546 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.546 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.546 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.546 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.546 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.546 "name": "Existed_Raid", 00:11:40.546 "uuid": "ffeb84bb-29fe-440c-a3f8-95c872ac3aff", 00:11:40.546 "strip_size_kb": 0, 00:11:40.546 "state": "online", 00:11:40.546 "raid_level": "raid1", 00:11:40.546 "superblock": true, 00:11:40.546 "num_base_bdevs": 4, 
00:11:40.546 "num_base_bdevs_discovered": 4, 00:11:40.546 "num_base_bdevs_operational": 4, 00:11:40.546 "base_bdevs_list": [ 00:11:40.546 { 00:11:40.546 "name": "BaseBdev1", 00:11:40.546 "uuid": "54588fd0-0ce3-4ae0-9daa-7d57296d7493", 00:11:40.546 "is_configured": true, 00:11:40.546 "data_offset": 2048, 00:11:40.546 "data_size": 63488 00:11:40.546 }, 00:11:40.546 { 00:11:40.546 "name": "BaseBdev2", 00:11:40.546 "uuid": "edf6fb41-28a0-4c8a-95b0-d378d498d928", 00:11:40.546 "is_configured": true, 00:11:40.546 "data_offset": 2048, 00:11:40.546 "data_size": 63488 00:11:40.546 }, 00:11:40.546 { 00:11:40.547 "name": "BaseBdev3", 00:11:40.547 "uuid": "f2b207c7-0d70-4c86-a8a5-8f2e74bb38cb", 00:11:40.547 "is_configured": true, 00:11:40.547 "data_offset": 2048, 00:11:40.547 "data_size": 63488 00:11:40.547 }, 00:11:40.547 { 00:11:40.547 "name": "BaseBdev4", 00:11:40.547 "uuid": "293fc381-0657-4a5d-bc51-d17f8a2af336", 00:11:40.547 "is_configured": true, 00:11:40.547 "data_offset": 2048, 00:11:40.547 "data_size": 63488 00:11:40.547 } 00:11:40.547 ] 00:11:40.547 }' 00:11:40.547 15:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.547 15:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.806 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:40.806 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:40.806 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:40.806 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:40.806 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:40.806 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:40.806 
15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:40.806 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:40.806 15:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.806 15:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.806 [2024-11-20 15:19:27.269159] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:41.064 15:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.064 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:41.064 "name": "Existed_Raid", 00:11:41.064 "aliases": [ 00:11:41.064 "ffeb84bb-29fe-440c-a3f8-95c872ac3aff" 00:11:41.064 ], 00:11:41.064 "product_name": "Raid Volume", 00:11:41.064 "block_size": 512, 00:11:41.064 "num_blocks": 63488, 00:11:41.064 "uuid": "ffeb84bb-29fe-440c-a3f8-95c872ac3aff", 00:11:41.064 "assigned_rate_limits": { 00:11:41.064 "rw_ios_per_sec": 0, 00:11:41.064 "rw_mbytes_per_sec": 0, 00:11:41.064 "r_mbytes_per_sec": 0, 00:11:41.064 "w_mbytes_per_sec": 0 00:11:41.065 }, 00:11:41.065 "claimed": false, 00:11:41.065 "zoned": false, 00:11:41.065 "supported_io_types": { 00:11:41.065 "read": true, 00:11:41.065 "write": true, 00:11:41.065 "unmap": false, 00:11:41.065 "flush": false, 00:11:41.065 "reset": true, 00:11:41.065 "nvme_admin": false, 00:11:41.065 "nvme_io": false, 00:11:41.065 "nvme_io_md": false, 00:11:41.065 "write_zeroes": true, 00:11:41.065 "zcopy": false, 00:11:41.065 "get_zone_info": false, 00:11:41.065 "zone_management": false, 00:11:41.065 "zone_append": false, 00:11:41.065 "compare": false, 00:11:41.065 "compare_and_write": false, 00:11:41.065 "abort": false, 00:11:41.065 "seek_hole": false, 00:11:41.065 "seek_data": false, 00:11:41.065 "copy": false, 00:11:41.065 
"nvme_iov_md": false 00:11:41.065 }, 00:11:41.065 "memory_domains": [ 00:11:41.065 { 00:11:41.065 "dma_device_id": "system", 00:11:41.065 "dma_device_type": 1 00:11:41.065 }, 00:11:41.065 { 00:11:41.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.065 "dma_device_type": 2 00:11:41.065 }, 00:11:41.065 { 00:11:41.065 "dma_device_id": "system", 00:11:41.065 "dma_device_type": 1 00:11:41.065 }, 00:11:41.065 { 00:11:41.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.065 "dma_device_type": 2 00:11:41.065 }, 00:11:41.065 { 00:11:41.065 "dma_device_id": "system", 00:11:41.065 "dma_device_type": 1 00:11:41.065 }, 00:11:41.065 { 00:11:41.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.065 "dma_device_type": 2 00:11:41.065 }, 00:11:41.065 { 00:11:41.065 "dma_device_id": "system", 00:11:41.065 "dma_device_type": 1 00:11:41.065 }, 00:11:41.065 { 00:11:41.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.065 "dma_device_type": 2 00:11:41.065 } 00:11:41.065 ], 00:11:41.065 "driver_specific": { 00:11:41.065 "raid": { 00:11:41.065 "uuid": "ffeb84bb-29fe-440c-a3f8-95c872ac3aff", 00:11:41.065 "strip_size_kb": 0, 00:11:41.065 "state": "online", 00:11:41.065 "raid_level": "raid1", 00:11:41.065 "superblock": true, 00:11:41.065 "num_base_bdevs": 4, 00:11:41.065 "num_base_bdevs_discovered": 4, 00:11:41.065 "num_base_bdevs_operational": 4, 00:11:41.065 "base_bdevs_list": [ 00:11:41.065 { 00:11:41.065 "name": "BaseBdev1", 00:11:41.065 "uuid": "54588fd0-0ce3-4ae0-9daa-7d57296d7493", 00:11:41.065 "is_configured": true, 00:11:41.065 "data_offset": 2048, 00:11:41.065 "data_size": 63488 00:11:41.065 }, 00:11:41.065 { 00:11:41.065 "name": "BaseBdev2", 00:11:41.065 "uuid": "edf6fb41-28a0-4c8a-95b0-d378d498d928", 00:11:41.065 "is_configured": true, 00:11:41.065 "data_offset": 2048, 00:11:41.065 "data_size": 63488 00:11:41.065 }, 00:11:41.065 { 00:11:41.065 "name": "BaseBdev3", 00:11:41.065 "uuid": "f2b207c7-0d70-4c86-a8a5-8f2e74bb38cb", 00:11:41.065 "is_configured": true, 
00:11:41.065 "data_offset": 2048, 00:11:41.065 "data_size": 63488 00:11:41.065 }, 00:11:41.065 { 00:11:41.065 "name": "BaseBdev4", 00:11:41.065 "uuid": "293fc381-0657-4a5d-bc51-d17f8a2af336", 00:11:41.065 "is_configured": true, 00:11:41.065 "data_offset": 2048, 00:11:41.065 "data_size": 63488 00:11:41.065 } 00:11:41.065 ] 00:11:41.065 } 00:11:41.065 } 00:11:41.065 }' 00:11:41.065 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:41.065 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:41.065 BaseBdev2 00:11:41.065 BaseBdev3 00:11:41.065 BaseBdev4' 00:11:41.065 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.065 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:41.065 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.065 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:41.065 15:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.065 15:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.065 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.065 15:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.065 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.065 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.065 15:19:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.065 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.065 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:41.065 15:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.065 15:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.065 15:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.065 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.065 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.065 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.065 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.065 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:41.065 15:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.065 15:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.065 15:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.065 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.065 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.065 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:41.065 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:41.065 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.065 15:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.065 15:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.325 15:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.325 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.325 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.325 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:41.325 15:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.325 15:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.325 [2024-11-20 15:19:27.564515] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:41.325 15:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.325 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:41.325 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:41.325 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:41.325 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:41.325 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:41.325 15:19:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:41.325 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.325 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.325 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.325 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.325 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:41.325 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.325 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.325 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.325 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.325 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.325 15:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.325 15:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.325 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.325 15:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.325 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.325 "name": "Existed_Raid", 00:11:41.325 "uuid": "ffeb84bb-29fe-440c-a3f8-95c872ac3aff", 00:11:41.325 "strip_size_kb": 0, 00:11:41.325 
"state": "online", 00:11:41.325 "raid_level": "raid1", 00:11:41.325 "superblock": true, 00:11:41.325 "num_base_bdevs": 4, 00:11:41.325 "num_base_bdevs_discovered": 3, 00:11:41.325 "num_base_bdevs_operational": 3, 00:11:41.325 "base_bdevs_list": [ 00:11:41.325 { 00:11:41.326 "name": null, 00:11:41.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.326 "is_configured": false, 00:11:41.326 "data_offset": 0, 00:11:41.326 "data_size": 63488 00:11:41.326 }, 00:11:41.326 { 00:11:41.326 "name": "BaseBdev2", 00:11:41.326 "uuid": "edf6fb41-28a0-4c8a-95b0-d378d498d928", 00:11:41.326 "is_configured": true, 00:11:41.326 "data_offset": 2048, 00:11:41.326 "data_size": 63488 00:11:41.326 }, 00:11:41.326 { 00:11:41.326 "name": "BaseBdev3", 00:11:41.326 "uuid": "f2b207c7-0d70-4c86-a8a5-8f2e74bb38cb", 00:11:41.326 "is_configured": true, 00:11:41.326 "data_offset": 2048, 00:11:41.326 "data_size": 63488 00:11:41.326 }, 00:11:41.326 { 00:11:41.326 "name": "BaseBdev4", 00:11:41.326 "uuid": "293fc381-0657-4a5d-bc51-d17f8a2af336", 00:11:41.326 "is_configured": true, 00:11:41.326 "data_offset": 2048, 00:11:41.326 "data_size": 63488 00:11:41.326 } 00:11:41.326 ] 00:11:41.326 }' 00:11:41.326 15:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.326 15:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.894 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:41.894 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:41.894 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.894 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:41.894 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.894 15:19:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.894 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.894 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:41.894 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:41.894 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:41.894 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.894 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.894 [2024-11-20 15:19:28.137080] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:41.894 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.894 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:41.894 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:41.894 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.894 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.894 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.894 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:41.894 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.894 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:41.894 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:11:41.894 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:41.894 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.894 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.894 [2024-11-20 15:19:28.288835] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:42.157 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.157 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:42.157 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:42.157 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.157 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:42.157 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.157 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.157 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.157 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:42.157 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:42.157 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:42.157 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.157 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.157 [2024-11-20 15:19:28.433235] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:42.157 [2024-11-20 15:19:28.433345] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:42.157 [2024-11-20 15:19:28.529758] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:42.157 [2024-11-20 15:19:28.529823] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:42.157 [2024-11-20 15:19:28.529838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:42.158 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.158 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:42.158 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:42.158 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.158 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.158 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.158 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:42.158 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.158 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:42.158 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:42.158 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:42.158 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:42.158 15:19:28 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:42.158 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:42.158 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.158 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.158 BaseBdev2 00:11:42.158 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.158 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:42.158 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:42.158 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:42.158 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:42.158 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:42.158 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:42.158 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:42.158 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.158 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.417 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.417 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:42.417 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.417 15:19:28 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:42.417 [ 00:11:42.417 { 00:11:42.417 "name": "BaseBdev2", 00:11:42.417 "aliases": [ 00:11:42.417 "c2342769-8f7a-4c32-a4c1-84397001b58a" 00:11:42.417 ], 00:11:42.417 "product_name": "Malloc disk", 00:11:42.417 "block_size": 512, 00:11:42.417 "num_blocks": 65536, 00:11:42.417 "uuid": "c2342769-8f7a-4c32-a4c1-84397001b58a", 00:11:42.417 "assigned_rate_limits": { 00:11:42.417 "rw_ios_per_sec": 0, 00:11:42.417 "rw_mbytes_per_sec": 0, 00:11:42.417 "r_mbytes_per_sec": 0, 00:11:42.417 "w_mbytes_per_sec": 0 00:11:42.417 }, 00:11:42.417 "claimed": false, 00:11:42.417 "zoned": false, 00:11:42.417 "supported_io_types": { 00:11:42.417 "read": true, 00:11:42.417 "write": true, 00:11:42.417 "unmap": true, 00:11:42.417 "flush": true, 00:11:42.417 "reset": true, 00:11:42.417 "nvme_admin": false, 00:11:42.417 "nvme_io": false, 00:11:42.417 "nvme_io_md": false, 00:11:42.417 "write_zeroes": true, 00:11:42.417 "zcopy": true, 00:11:42.417 "get_zone_info": false, 00:11:42.417 "zone_management": false, 00:11:42.417 "zone_append": false, 00:11:42.417 "compare": false, 00:11:42.417 "compare_and_write": false, 00:11:42.417 "abort": true, 00:11:42.417 "seek_hole": false, 00:11:42.417 "seek_data": false, 00:11:42.417 "copy": true, 00:11:42.417 "nvme_iov_md": false 00:11:42.417 }, 00:11:42.417 "memory_domains": [ 00:11:42.417 { 00:11:42.417 "dma_device_id": "system", 00:11:42.417 "dma_device_type": 1 00:11:42.417 }, 00:11:42.417 { 00:11:42.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.417 "dma_device_type": 2 00:11:42.417 } 00:11:42.417 ], 00:11:42.417 "driver_specific": {} 00:11:42.417 } 00:11:42.417 ] 00:11:42.417 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.417 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:42.417 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:42.417 15:19:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:42.417 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:42.417 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.417 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.417 BaseBdev3 00:11:42.417 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.417 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:42.417 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:42.417 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:42.417 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:42.417 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:42.417 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.418 15:19:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.418 [ 00:11:42.418 { 00:11:42.418 "name": "BaseBdev3", 00:11:42.418 "aliases": [ 00:11:42.418 "e651cc82-ef48-4811-9e58-30de206f8e47" 00:11:42.418 ], 00:11:42.418 "product_name": "Malloc disk", 00:11:42.418 "block_size": 512, 00:11:42.418 "num_blocks": 65536, 00:11:42.418 "uuid": "e651cc82-ef48-4811-9e58-30de206f8e47", 00:11:42.418 "assigned_rate_limits": { 00:11:42.418 "rw_ios_per_sec": 0, 00:11:42.418 "rw_mbytes_per_sec": 0, 00:11:42.418 "r_mbytes_per_sec": 0, 00:11:42.418 "w_mbytes_per_sec": 0 00:11:42.418 }, 00:11:42.418 "claimed": false, 00:11:42.418 "zoned": false, 00:11:42.418 "supported_io_types": { 00:11:42.418 "read": true, 00:11:42.418 "write": true, 00:11:42.418 "unmap": true, 00:11:42.418 "flush": true, 00:11:42.418 "reset": true, 00:11:42.418 "nvme_admin": false, 00:11:42.418 "nvme_io": false, 00:11:42.418 "nvme_io_md": false, 00:11:42.418 "write_zeroes": true, 00:11:42.418 "zcopy": true, 00:11:42.418 "get_zone_info": false, 00:11:42.418 "zone_management": false, 00:11:42.418 "zone_append": false, 00:11:42.418 "compare": false, 00:11:42.418 "compare_and_write": false, 00:11:42.418 "abort": true, 00:11:42.418 "seek_hole": false, 00:11:42.418 "seek_data": false, 00:11:42.418 "copy": true, 00:11:42.418 "nvme_iov_md": false 00:11:42.418 }, 00:11:42.418 "memory_domains": [ 00:11:42.418 { 00:11:42.418 "dma_device_id": "system", 00:11:42.418 "dma_device_type": 1 00:11:42.418 }, 00:11:42.418 { 00:11:42.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.418 "dma_device_type": 2 00:11:42.418 } 00:11:42.418 ], 00:11:42.418 "driver_specific": {} 00:11:42.418 } 00:11:42.418 ] 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.418 BaseBdev4 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.418 [ 00:11:42.418 { 00:11:42.418 "name": "BaseBdev4", 00:11:42.418 "aliases": [ 00:11:42.418 "9cfba1e5-8790-49c1-8a96-80eb56cd3c79" 00:11:42.418 ], 00:11:42.418 "product_name": "Malloc disk", 00:11:42.418 "block_size": 512, 00:11:42.418 "num_blocks": 65536, 00:11:42.418 "uuid": "9cfba1e5-8790-49c1-8a96-80eb56cd3c79", 00:11:42.418 "assigned_rate_limits": { 00:11:42.418 "rw_ios_per_sec": 0, 00:11:42.418 "rw_mbytes_per_sec": 0, 00:11:42.418 "r_mbytes_per_sec": 0, 00:11:42.418 "w_mbytes_per_sec": 0 00:11:42.418 }, 00:11:42.418 "claimed": false, 00:11:42.418 "zoned": false, 00:11:42.418 "supported_io_types": { 00:11:42.418 "read": true, 00:11:42.418 "write": true, 00:11:42.418 "unmap": true, 00:11:42.418 "flush": true, 00:11:42.418 "reset": true, 00:11:42.418 "nvme_admin": false, 00:11:42.418 "nvme_io": false, 00:11:42.418 "nvme_io_md": false, 00:11:42.418 "write_zeroes": true, 00:11:42.418 "zcopy": true, 00:11:42.418 "get_zone_info": false, 00:11:42.418 "zone_management": false, 00:11:42.418 "zone_append": false, 00:11:42.418 "compare": false, 00:11:42.418 "compare_and_write": false, 00:11:42.418 "abort": true, 00:11:42.418 "seek_hole": false, 00:11:42.418 "seek_data": false, 00:11:42.418 "copy": true, 00:11:42.418 "nvme_iov_md": false 00:11:42.418 }, 00:11:42.418 "memory_domains": [ 00:11:42.418 { 00:11:42.418 "dma_device_id": "system", 00:11:42.418 "dma_device_type": 1 00:11:42.418 }, 00:11:42.418 { 00:11:42.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.418 "dma_device_type": 2 00:11:42.418 } 00:11:42.418 ], 00:11:42.418 "driver_specific": {} 00:11:42.418 } 00:11:42.418 ] 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.418 [2024-11-20 15:19:28.853784] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:42.418 [2024-11-20 15:19:28.853837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:42.418 [2024-11-20 15:19:28.853864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:42.418 [2024-11-20 15:19:28.856003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:42.418 [2024-11-20 15:19:28.856058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.418 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.678 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.678 "name": "Existed_Raid", 00:11:42.678 "uuid": "3ad1d86b-4d66-4202-aede-bfbb3391b2d4", 00:11:42.678 "strip_size_kb": 0, 00:11:42.678 "state": "configuring", 00:11:42.678 "raid_level": "raid1", 00:11:42.678 "superblock": true, 00:11:42.678 "num_base_bdevs": 4, 00:11:42.678 "num_base_bdevs_discovered": 3, 00:11:42.678 "num_base_bdevs_operational": 4, 00:11:42.678 "base_bdevs_list": [ 00:11:42.678 { 00:11:42.678 "name": "BaseBdev1", 00:11:42.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.678 "is_configured": false, 00:11:42.678 "data_offset": 0, 00:11:42.678 "data_size": 0 00:11:42.678 }, 00:11:42.678 { 00:11:42.678 "name": "BaseBdev2", 00:11:42.678 "uuid": "c2342769-8f7a-4c32-a4c1-84397001b58a", 
00:11:42.678 "is_configured": true, 00:11:42.678 "data_offset": 2048, 00:11:42.678 "data_size": 63488 00:11:42.678 }, 00:11:42.678 { 00:11:42.678 "name": "BaseBdev3", 00:11:42.678 "uuid": "e651cc82-ef48-4811-9e58-30de206f8e47", 00:11:42.678 "is_configured": true, 00:11:42.678 "data_offset": 2048, 00:11:42.678 "data_size": 63488 00:11:42.678 }, 00:11:42.678 { 00:11:42.678 "name": "BaseBdev4", 00:11:42.678 "uuid": "9cfba1e5-8790-49c1-8a96-80eb56cd3c79", 00:11:42.678 "is_configured": true, 00:11:42.678 "data_offset": 2048, 00:11:42.678 "data_size": 63488 00:11:42.678 } 00:11:42.678 ] 00:11:42.678 }' 00:11:42.678 15:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.678 15:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.936 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:42.936 15:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.936 15:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.936 [2024-11-20 15:19:29.309107] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:42.936 15:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.936 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:42.936 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.936 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.936 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.936 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:42.936 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.936 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.936 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.936 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.936 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.936 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.936 15:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.936 15:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.936 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.936 15:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.936 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.936 "name": "Existed_Raid", 00:11:42.936 "uuid": "3ad1d86b-4d66-4202-aede-bfbb3391b2d4", 00:11:42.936 "strip_size_kb": 0, 00:11:42.936 "state": "configuring", 00:11:42.936 "raid_level": "raid1", 00:11:42.936 "superblock": true, 00:11:42.936 "num_base_bdevs": 4, 00:11:42.936 "num_base_bdevs_discovered": 2, 00:11:42.936 "num_base_bdevs_operational": 4, 00:11:42.936 "base_bdevs_list": [ 00:11:42.936 { 00:11:42.936 "name": "BaseBdev1", 00:11:42.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.936 "is_configured": false, 00:11:42.936 "data_offset": 0, 00:11:42.936 "data_size": 0 00:11:42.936 }, 00:11:42.936 { 00:11:42.936 "name": null, 00:11:42.936 "uuid": "c2342769-8f7a-4c32-a4c1-84397001b58a", 00:11:42.936 
"is_configured": false, 00:11:42.936 "data_offset": 0, 00:11:42.936 "data_size": 63488 00:11:42.936 }, 00:11:42.936 { 00:11:42.936 "name": "BaseBdev3", 00:11:42.936 "uuid": "e651cc82-ef48-4811-9e58-30de206f8e47", 00:11:42.936 "is_configured": true, 00:11:42.936 "data_offset": 2048, 00:11:42.936 "data_size": 63488 00:11:42.936 }, 00:11:42.936 { 00:11:42.936 "name": "BaseBdev4", 00:11:42.936 "uuid": "9cfba1e5-8790-49c1-8a96-80eb56cd3c79", 00:11:42.936 "is_configured": true, 00:11:42.936 "data_offset": 2048, 00:11:42.936 "data_size": 63488 00:11:42.936 } 00:11:42.936 ] 00:11:42.936 }' 00:11:42.936 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.936 15:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.517 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:43.517 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.517 15:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.517 15:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.517 15:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.517 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:43.517 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:43.517 15:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.517 15:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.517 [2024-11-20 15:19:29.847195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:43.517 BaseBdev1 
00:11:43.517 15:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.517 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:43.517 15:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:43.517 15:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:43.517 15:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:43.517 15:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:43.517 15:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:43.517 15:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:43.517 15:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.517 15:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.517 15:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.517 15:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:43.517 15:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.517 15:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.517 [ 00:11:43.517 { 00:11:43.517 "name": "BaseBdev1", 00:11:43.517 "aliases": [ 00:11:43.517 "5ec1d418-0051-4b2f-b591-3de9c1f355ef" 00:11:43.517 ], 00:11:43.517 "product_name": "Malloc disk", 00:11:43.517 "block_size": 512, 00:11:43.517 "num_blocks": 65536, 00:11:43.517 "uuid": "5ec1d418-0051-4b2f-b591-3de9c1f355ef", 00:11:43.517 "assigned_rate_limits": { 00:11:43.517 
"rw_ios_per_sec": 0, 00:11:43.517 "rw_mbytes_per_sec": 0, 00:11:43.517 "r_mbytes_per_sec": 0, 00:11:43.517 "w_mbytes_per_sec": 0 00:11:43.517 }, 00:11:43.517 "claimed": true, 00:11:43.517 "claim_type": "exclusive_write", 00:11:43.517 "zoned": false, 00:11:43.517 "supported_io_types": { 00:11:43.517 "read": true, 00:11:43.517 "write": true, 00:11:43.517 "unmap": true, 00:11:43.517 "flush": true, 00:11:43.517 "reset": true, 00:11:43.518 "nvme_admin": false, 00:11:43.518 "nvme_io": false, 00:11:43.518 "nvme_io_md": false, 00:11:43.518 "write_zeroes": true, 00:11:43.518 "zcopy": true, 00:11:43.518 "get_zone_info": false, 00:11:43.518 "zone_management": false, 00:11:43.518 "zone_append": false, 00:11:43.518 "compare": false, 00:11:43.518 "compare_and_write": false, 00:11:43.518 "abort": true, 00:11:43.518 "seek_hole": false, 00:11:43.518 "seek_data": false, 00:11:43.518 "copy": true, 00:11:43.518 "nvme_iov_md": false 00:11:43.518 }, 00:11:43.518 "memory_domains": [ 00:11:43.518 { 00:11:43.518 "dma_device_id": "system", 00:11:43.518 "dma_device_type": 1 00:11:43.518 }, 00:11:43.518 { 00:11:43.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.518 "dma_device_type": 2 00:11:43.518 } 00:11:43.518 ], 00:11:43.518 "driver_specific": {} 00:11:43.518 } 00:11:43.518 ] 00:11:43.518 15:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.518 15:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:43.518 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:43.518 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.518 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.518 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:43.518 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.518 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.518 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.518 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.518 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.518 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.518 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.518 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.518 15:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.518 15:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.518 15:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.518 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.518 "name": "Existed_Raid", 00:11:43.518 "uuid": "3ad1d86b-4d66-4202-aede-bfbb3391b2d4", 00:11:43.518 "strip_size_kb": 0, 00:11:43.518 "state": "configuring", 00:11:43.518 "raid_level": "raid1", 00:11:43.518 "superblock": true, 00:11:43.518 "num_base_bdevs": 4, 00:11:43.518 "num_base_bdevs_discovered": 3, 00:11:43.518 "num_base_bdevs_operational": 4, 00:11:43.518 "base_bdevs_list": [ 00:11:43.518 { 00:11:43.518 "name": "BaseBdev1", 00:11:43.518 "uuid": "5ec1d418-0051-4b2f-b591-3de9c1f355ef", 00:11:43.518 "is_configured": true, 00:11:43.518 "data_offset": 2048, 00:11:43.518 "data_size": 63488 
00:11:43.518 }, 00:11:43.518 { 00:11:43.518 "name": null, 00:11:43.518 "uuid": "c2342769-8f7a-4c32-a4c1-84397001b58a", 00:11:43.518 "is_configured": false, 00:11:43.518 "data_offset": 0, 00:11:43.518 "data_size": 63488 00:11:43.518 }, 00:11:43.518 { 00:11:43.518 "name": "BaseBdev3", 00:11:43.518 "uuid": "e651cc82-ef48-4811-9e58-30de206f8e47", 00:11:43.518 "is_configured": true, 00:11:43.518 "data_offset": 2048, 00:11:43.518 "data_size": 63488 00:11:43.518 }, 00:11:43.518 { 00:11:43.518 "name": "BaseBdev4", 00:11:43.518 "uuid": "9cfba1e5-8790-49c1-8a96-80eb56cd3c79", 00:11:43.518 "is_configured": true, 00:11:43.518 "data_offset": 2048, 00:11:43.518 "data_size": 63488 00:11:43.518 } 00:11:43.518 ] 00:11:43.518 }' 00:11:43.518 15:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.518 15:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.086 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:44.086 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.086 15:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.086 15:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.086 15:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.086 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:44.086 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:44.086 15:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.086 15:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.086 
[2024-11-20 15:19:30.382836] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:44.086 15:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.086 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:44.086 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.086 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:44.086 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.086 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.086 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.086 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.086 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.086 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.086 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.086 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.086 15:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.086 15:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.086 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.086 15:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.086 15:19:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.086 "name": "Existed_Raid", 00:11:44.086 "uuid": "3ad1d86b-4d66-4202-aede-bfbb3391b2d4", 00:11:44.086 "strip_size_kb": 0, 00:11:44.086 "state": "configuring", 00:11:44.086 "raid_level": "raid1", 00:11:44.086 "superblock": true, 00:11:44.086 "num_base_bdevs": 4, 00:11:44.086 "num_base_bdevs_discovered": 2, 00:11:44.086 "num_base_bdevs_operational": 4, 00:11:44.086 "base_bdevs_list": [ 00:11:44.086 { 00:11:44.086 "name": "BaseBdev1", 00:11:44.086 "uuid": "5ec1d418-0051-4b2f-b591-3de9c1f355ef", 00:11:44.086 "is_configured": true, 00:11:44.086 "data_offset": 2048, 00:11:44.086 "data_size": 63488 00:11:44.086 }, 00:11:44.086 { 00:11:44.086 "name": null, 00:11:44.086 "uuid": "c2342769-8f7a-4c32-a4c1-84397001b58a", 00:11:44.086 "is_configured": false, 00:11:44.086 "data_offset": 0, 00:11:44.086 "data_size": 63488 00:11:44.086 }, 00:11:44.086 { 00:11:44.086 "name": null, 00:11:44.086 "uuid": "e651cc82-ef48-4811-9e58-30de206f8e47", 00:11:44.086 "is_configured": false, 00:11:44.086 "data_offset": 0, 00:11:44.086 "data_size": 63488 00:11:44.086 }, 00:11:44.086 { 00:11:44.086 "name": "BaseBdev4", 00:11:44.086 "uuid": "9cfba1e5-8790-49c1-8a96-80eb56cd3c79", 00:11:44.086 "is_configured": true, 00:11:44.086 "data_offset": 2048, 00:11:44.086 "data_size": 63488 00:11:44.086 } 00:11:44.086 ] 00:11:44.086 }' 00:11:44.086 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.086 15:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.653 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.653 15:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.653 15:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.653 15:19:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:44.653 15:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.653 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:44.653 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:44.653 15:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.653 15:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.653 [2024-11-20 15:19:30.878155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:44.653 15:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.653 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:44.653 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.653 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:44.653 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.653 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.653 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.653 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.653 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.653 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:44.653 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.653 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.653 15:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.653 15:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.653 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.653 15:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.653 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.653 "name": "Existed_Raid", 00:11:44.653 "uuid": "3ad1d86b-4d66-4202-aede-bfbb3391b2d4", 00:11:44.653 "strip_size_kb": 0, 00:11:44.653 "state": "configuring", 00:11:44.653 "raid_level": "raid1", 00:11:44.653 "superblock": true, 00:11:44.653 "num_base_bdevs": 4, 00:11:44.653 "num_base_bdevs_discovered": 3, 00:11:44.653 "num_base_bdevs_operational": 4, 00:11:44.653 "base_bdevs_list": [ 00:11:44.653 { 00:11:44.653 "name": "BaseBdev1", 00:11:44.653 "uuid": "5ec1d418-0051-4b2f-b591-3de9c1f355ef", 00:11:44.653 "is_configured": true, 00:11:44.653 "data_offset": 2048, 00:11:44.653 "data_size": 63488 00:11:44.653 }, 00:11:44.653 { 00:11:44.653 "name": null, 00:11:44.653 "uuid": "c2342769-8f7a-4c32-a4c1-84397001b58a", 00:11:44.653 "is_configured": false, 00:11:44.653 "data_offset": 0, 00:11:44.653 "data_size": 63488 00:11:44.653 }, 00:11:44.653 { 00:11:44.653 "name": "BaseBdev3", 00:11:44.653 "uuid": "e651cc82-ef48-4811-9e58-30de206f8e47", 00:11:44.653 "is_configured": true, 00:11:44.653 "data_offset": 2048, 00:11:44.653 "data_size": 63488 00:11:44.654 }, 00:11:44.654 { 00:11:44.654 "name": "BaseBdev4", 00:11:44.654 "uuid": 
"9cfba1e5-8790-49c1-8a96-80eb56cd3c79", 00:11:44.654 "is_configured": true, 00:11:44.654 "data_offset": 2048, 00:11:44.654 "data_size": 63488 00:11:44.654 } 00:11:44.654 ] 00:11:44.654 }' 00:11:44.654 15:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.654 15:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.912 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.912 15:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.912 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:44.912 15:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.912 15:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.912 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:44.912 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:44.912 15:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.912 15:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.912 [2024-11-20 15:19:31.361560] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:45.171 15:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.171 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:45.171 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.171 15:19:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.171 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.171 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.171 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.171 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.171 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.171 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.171 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.171 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.171 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.171 15:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.171 15:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.171 15:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.171 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.171 "name": "Existed_Raid", 00:11:45.171 "uuid": "3ad1d86b-4d66-4202-aede-bfbb3391b2d4", 00:11:45.171 "strip_size_kb": 0, 00:11:45.171 "state": "configuring", 00:11:45.171 "raid_level": "raid1", 00:11:45.171 "superblock": true, 00:11:45.171 "num_base_bdevs": 4, 00:11:45.171 "num_base_bdevs_discovered": 2, 00:11:45.171 "num_base_bdevs_operational": 4, 00:11:45.171 "base_bdevs_list": [ 00:11:45.171 { 00:11:45.171 "name": null, 00:11:45.171 
"uuid": "5ec1d418-0051-4b2f-b591-3de9c1f355ef", 00:11:45.171 "is_configured": false, 00:11:45.171 "data_offset": 0, 00:11:45.171 "data_size": 63488 00:11:45.171 }, 00:11:45.171 { 00:11:45.171 "name": null, 00:11:45.171 "uuid": "c2342769-8f7a-4c32-a4c1-84397001b58a", 00:11:45.171 "is_configured": false, 00:11:45.171 "data_offset": 0, 00:11:45.171 "data_size": 63488 00:11:45.171 }, 00:11:45.171 { 00:11:45.171 "name": "BaseBdev3", 00:11:45.171 "uuid": "e651cc82-ef48-4811-9e58-30de206f8e47", 00:11:45.171 "is_configured": true, 00:11:45.171 "data_offset": 2048, 00:11:45.171 "data_size": 63488 00:11:45.171 }, 00:11:45.171 { 00:11:45.171 "name": "BaseBdev4", 00:11:45.171 "uuid": "9cfba1e5-8790-49c1-8a96-80eb56cd3c79", 00:11:45.171 "is_configured": true, 00:11:45.171 "data_offset": 2048, 00:11:45.171 "data_size": 63488 00:11:45.171 } 00:11:45.171 ] 00:11:45.171 }' 00:11:45.171 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.171 15:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.430 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:45.430 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.430 15:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.430 15:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.689 15:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.689 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:45.689 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:45.689 15:19:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.689 15:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.689 [2024-11-20 15:19:31.934961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:45.689 15:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.689 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:45.689 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.689 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.689 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.689 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.689 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.689 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.689 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.689 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.689 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.689 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.689 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.689 15:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.689 15:19:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.689 15:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.689 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.689 "name": "Existed_Raid", 00:11:45.689 "uuid": "3ad1d86b-4d66-4202-aede-bfbb3391b2d4", 00:11:45.689 "strip_size_kb": 0, 00:11:45.689 "state": "configuring", 00:11:45.689 "raid_level": "raid1", 00:11:45.689 "superblock": true, 00:11:45.689 "num_base_bdevs": 4, 00:11:45.689 "num_base_bdevs_discovered": 3, 00:11:45.689 "num_base_bdevs_operational": 4, 00:11:45.689 "base_bdevs_list": [ 00:11:45.689 { 00:11:45.689 "name": null, 00:11:45.689 "uuid": "5ec1d418-0051-4b2f-b591-3de9c1f355ef", 00:11:45.689 "is_configured": false, 00:11:45.689 "data_offset": 0, 00:11:45.689 "data_size": 63488 00:11:45.689 }, 00:11:45.689 { 00:11:45.689 "name": "BaseBdev2", 00:11:45.689 "uuid": "c2342769-8f7a-4c32-a4c1-84397001b58a", 00:11:45.689 "is_configured": true, 00:11:45.689 "data_offset": 2048, 00:11:45.689 "data_size": 63488 00:11:45.689 }, 00:11:45.689 { 00:11:45.689 "name": "BaseBdev3", 00:11:45.689 "uuid": "e651cc82-ef48-4811-9e58-30de206f8e47", 00:11:45.689 "is_configured": true, 00:11:45.689 "data_offset": 2048, 00:11:45.689 "data_size": 63488 00:11:45.689 }, 00:11:45.689 { 00:11:45.689 "name": "BaseBdev4", 00:11:45.689 "uuid": "9cfba1e5-8790-49c1-8a96-80eb56cd3c79", 00:11:45.689 "is_configured": true, 00:11:45.689 "data_offset": 2048, 00:11:45.689 "data_size": 63488 00:11:45.689 } 00:11:45.689 ] 00:11:45.689 }' 00:11:45.689 15:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.689 15:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.948 15:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:45.948 15:19:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.948 15:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.948 15:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.948 15:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.948 15:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:45.948 15:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.948 15:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.948 15:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.948 15:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:45.948 15:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.948 15:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5ec1d418-0051-4b2f-b591-3de9c1f355ef 00:11:45.948 15:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.948 15:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.207 [2024-11-20 15:19:32.456088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:46.207 [2024-11-20 15:19:32.456388] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:46.207 [2024-11-20 15:19:32.456410] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:46.207 [2024-11-20 15:19:32.456700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:46.207 [2024-11-20 15:19:32.456907] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:46.207 [2024-11-20 15:19:32.456924] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:46.207 [2024-11-20 15:19:32.457090] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.207 NewBaseBdev 00:11:46.207 15:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.207 15:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:46.207 15:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:46.207 15:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:46.207 15:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:46.207 15:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:46.207 15:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:46.207 15:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:46.207 15:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.207 15:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.207 15:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.207 15:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:46.207 15:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.207 15:19:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.207 [ 00:11:46.207 { 00:11:46.207 "name": "NewBaseBdev", 00:11:46.207 "aliases": [ 00:11:46.207 "5ec1d418-0051-4b2f-b591-3de9c1f355ef" 00:11:46.207 ], 00:11:46.207 "product_name": "Malloc disk", 00:11:46.207 "block_size": 512, 00:11:46.207 "num_blocks": 65536, 00:11:46.207 "uuid": "5ec1d418-0051-4b2f-b591-3de9c1f355ef", 00:11:46.207 "assigned_rate_limits": { 00:11:46.207 "rw_ios_per_sec": 0, 00:11:46.207 "rw_mbytes_per_sec": 0, 00:11:46.207 "r_mbytes_per_sec": 0, 00:11:46.207 "w_mbytes_per_sec": 0 00:11:46.207 }, 00:11:46.207 "claimed": true, 00:11:46.207 "claim_type": "exclusive_write", 00:11:46.207 "zoned": false, 00:11:46.207 "supported_io_types": { 00:11:46.207 "read": true, 00:11:46.207 "write": true, 00:11:46.207 "unmap": true, 00:11:46.207 "flush": true, 00:11:46.207 "reset": true, 00:11:46.207 "nvme_admin": false, 00:11:46.207 "nvme_io": false, 00:11:46.207 "nvme_io_md": false, 00:11:46.207 "write_zeroes": true, 00:11:46.207 "zcopy": true, 00:11:46.207 "get_zone_info": false, 00:11:46.207 "zone_management": false, 00:11:46.207 "zone_append": false, 00:11:46.207 "compare": false, 00:11:46.207 "compare_and_write": false, 00:11:46.207 "abort": true, 00:11:46.207 "seek_hole": false, 00:11:46.207 "seek_data": false, 00:11:46.207 "copy": true, 00:11:46.207 "nvme_iov_md": false 00:11:46.207 }, 00:11:46.207 "memory_domains": [ 00:11:46.207 { 00:11:46.207 "dma_device_id": "system", 00:11:46.207 "dma_device_type": 1 00:11:46.207 }, 00:11:46.207 { 00:11:46.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.207 "dma_device_type": 2 00:11:46.207 } 00:11:46.207 ], 00:11:46.207 "driver_specific": {} 00:11:46.207 } 00:11:46.207 ] 00:11:46.207 15:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.207 15:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:46.207 15:19:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:46.207 15:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.207 15:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.207 15:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.207 15:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.207 15:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.207 15:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.207 15:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.207 15:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.207 15:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.207 15:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.207 15:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.207 15:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.207 15:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.207 15:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.207 15:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.207 "name": "Existed_Raid", 00:11:46.207 "uuid": "3ad1d86b-4d66-4202-aede-bfbb3391b2d4", 00:11:46.207 "strip_size_kb": 0, 00:11:46.207 
"state": "online", 00:11:46.207 "raid_level": "raid1", 00:11:46.207 "superblock": true, 00:11:46.207 "num_base_bdevs": 4, 00:11:46.207 "num_base_bdevs_discovered": 4, 00:11:46.207 "num_base_bdevs_operational": 4, 00:11:46.207 "base_bdevs_list": [ 00:11:46.207 { 00:11:46.207 "name": "NewBaseBdev", 00:11:46.207 "uuid": "5ec1d418-0051-4b2f-b591-3de9c1f355ef", 00:11:46.207 "is_configured": true, 00:11:46.207 "data_offset": 2048, 00:11:46.207 "data_size": 63488 00:11:46.207 }, 00:11:46.207 { 00:11:46.207 "name": "BaseBdev2", 00:11:46.207 "uuid": "c2342769-8f7a-4c32-a4c1-84397001b58a", 00:11:46.207 "is_configured": true, 00:11:46.207 "data_offset": 2048, 00:11:46.207 "data_size": 63488 00:11:46.207 }, 00:11:46.207 { 00:11:46.207 "name": "BaseBdev3", 00:11:46.207 "uuid": "e651cc82-ef48-4811-9e58-30de206f8e47", 00:11:46.207 "is_configured": true, 00:11:46.207 "data_offset": 2048, 00:11:46.207 "data_size": 63488 00:11:46.207 }, 00:11:46.207 { 00:11:46.207 "name": "BaseBdev4", 00:11:46.207 "uuid": "9cfba1e5-8790-49c1-8a96-80eb56cd3c79", 00:11:46.207 "is_configured": true, 00:11:46.207 "data_offset": 2048, 00:11:46.207 "data_size": 63488 00:11:46.207 } 00:11:46.207 ] 00:11:46.207 }' 00:11:46.207 15:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.207 15:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.465 15:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:46.465 15:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:46.465 15:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:46.465 15:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:46.465 15:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:46.465 
15:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:46.465 15:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:46.465 15:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:46.465 15:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.465 15:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.724 [2024-11-20 15:19:32.947841] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:46.724 15:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.724 15:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:46.724 "name": "Existed_Raid", 00:11:46.724 "aliases": [ 00:11:46.724 "3ad1d86b-4d66-4202-aede-bfbb3391b2d4" 00:11:46.724 ], 00:11:46.724 "product_name": "Raid Volume", 00:11:46.724 "block_size": 512, 00:11:46.724 "num_blocks": 63488, 00:11:46.724 "uuid": "3ad1d86b-4d66-4202-aede-bfbb3391b2d4", 00:11:46.724 "assigned_rate_limits": { 00:11:46.724 "rw_ios_per_sec": 0, 00:11:46.724 "rw_mbytes_per_sec": 0, 00:11:46.724 "r_mbytes_per_sec": 0, 00:11:46.724 "w_mbytes_per_sec": 0 00:11:46.724 }, 00:11:46.724 "claimed": false, 00:11:46.724 "zoned": false, 00:11:46.724 "supported_io_types": { 00:11:46.724 "read": true, 00:11:46.724 "write": true, 00:11:46.724 "unmap": false, 00:11:46.724 "flush": false, 00:11:46.724 "reset": true, 00:11:46.724 "nvme_admin": false, 00:11:46.724 "nvme_io": false, 00:11:46.724 "nvme_io_md": false, 00:11:46.724 "write_zeroes": true, 00:11:46.724 "zcopy": false, 00:11:46.724 "get_zone_info": false, 00:11:46.724 "zone_management": false, 00:11:46.724 "zone_append": false, 00:11:46.724 "compare": false, 00:11:46.724 "compare_and_write": false, 00:11:46.724 
"abort": false, 00:11:46.724 "seek_hole": false, 00:11:46.724 "seek_data": false, 00:11:46.724 "copy": false, 00:11:46.724 "nvme_iov_md": false 00:11:46.724 }, 00:11:46.724 "memory_domains": [ 00:11:46.724 { 00:11:46.724 "dma_device_id": "system", 00:11:46.724 "dma_device_type": 1 00:11:46.724 }, 00:11:46.724 { 00:11:46.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.724 "dma_device_type": 2 00:11:46.724 }, 00:11:46.724 { 00:11:46.724 "dma_device_id": "system", 00:11:46.724 "dma_device_type": 1 00:11:46.724 }, 00:11:46.724 { 00:11:46.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.724 "dma_device_type": 2 00:11:46.724 }, 00:11:46.724 { 00:11:46.724 "dma_device_id": "system", 00:11:46.724 "dma_device_type": 1 00:11:46.724 }, 00:11:46.724 { 00:11:46.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.724 "dma_device_type": 2 00:11:46.724 }, 00:11:46.724 { 00:11:46.724 "dma_device_id": "system", 00:11:46.724 "dma_device_type": 1 00:11:46.724 }, 00:11:46.724 { 00:11:46.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.724 "dma_device_type": 2 00:11:46.724 } 00:11:46.724 ], 00:11:46.724 "driver_specific": { 00:11:46.724 "raid": { 00:11:46.724 "uuid": "3ad1d86b-4d66-4202-aede-bfbb3391b2d4", 00:11:46.724 "strip_size_kb": 0, 00:11:46.724 "state": "online", 00:11:46.724 "raid_level": "raid1", 00:11:46.724 "superblock": true, 00:11:46.724 "num_base_bdevs": 4, 00:11:46.724 "num_base_bdevs_discovered": 4, 00:11:46.724 "num_base_bdevs_operational": 4, 00:11:46.724 "base_bdevs_list": [ 00:11:46.724 { 00:11:46.724 "name": "NewBaseBdev", 00:11:46.724 "uuid": "5ec1d418-0051-4b2f-b591-3de9c1f355ef", 00:11:46.724 "is_configured": true, 00:11:46.724 "data_offset": 2048, 00:11:46.724 "data_size": 63488 00:11:46.724 }, 00:11:46.724 { 00:11:46.724 "name": "BaseBdev2", 00:11:46.724 "uuid": "c2342769-8f7a-4c32-a4c1-84397001b58a", 00:11:46.724 "is_configured": true, 00:11:46.724 "data_offset": 2048, 00:11:46.724 "data_size": 63488 00:11:46.724 }, 00:11:46.724 { 
00:11:46.724 "name": "BaseBdev3", 00:11:46.724 "uuid": "e651cc82-ef48-4811-9e58-30de206f8e47", 00:11:46.724 "is_configured": true, 00:11:46.724 "data_offset": 2048, 00:11:46.724 "data_size": 63488 00:11:46.724 }, 00:11:46.724 { 00:11:46.724 "name": "BaseBdev4", 00:11:46.724 "uuid": "9cfba1e5-8790-49c1-8a96-80eb56cd3c79", 00:11:46.724 "is_configured": true, 00:11:46.724 "data_offset": 2048, 00:11:46.724 "data_size": 63488 00:11:46.724 } 00:11:46.724 ] 00:11:46.724 } 00:11:46.725 } 00:11:46.725 }' 00:11:46.725 15:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:46.725 15:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:46.725 BaseBdev2 00:11:46.725 BaseBdev3 00:11:46.725 BaseBdev4' 00:11:46.725 15:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.725 15:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:46.725 15:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.725 15:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:46.725 15:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.725 15:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.725 15:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.725 15:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.725 15:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:11:46.725 15:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.725 15:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.725 15:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:46.725 15:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.725 15:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.725 15:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.725 15:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.725 15:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.725 15:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.725 15:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.725 15:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:46.725 15:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.725 15:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.725 15:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.725 15:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.984 15:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.984 15:19:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.984 15:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.984 15:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.984 15:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:46.984 15:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.984 15:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.984 15:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.984 15:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.984 15:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.984 15:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:46.984 15:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.984 15:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.984 [2024-11-20 15:19:33.251023] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:46.984 [2024-11-20 15:19:33.251072] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:46.984 [2024-11-20 15:19:33.251175] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:46.984 [2024-11-20 15:19:33.251467] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:46.984 [2024-11-20 15:19:33.251493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:11:46.984 15:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.984 15:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73671 00:11:46.984 15:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73671 ']' 00:11:46.984 15:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73671 00:11:46.984 15:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:46.984 15:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:46.984 15:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73671 00:11:46.984 killing process with pid 73671 00:11:46.984 15:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:46.984 15:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:46.984 15:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73671' 00:11:46.984 15:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73671 00:11:46.984 [2024-11-20 15:19:33.304362] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:46.984 15:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73671 00:11:47.243 [2024-11-20 15:19:33.705988] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:48.620 ************************************ 00:11:48.620 END TEST raid_state_function_test_sb 00:11:48.620 ************************************ 00:11:48.620 15:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:48.620 00:11:48.620 real 0m11.532s 
00:11:48.620 user 0m18.322s 00:11:48.620 sys 0m2.300s 00:11:48.620 15:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.620 15:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.620 15:19:34 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:48.620 15:19:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:48.620 15:19:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.620 15:19:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:48.620 ************************************ 00:11:48.620 START TEST raid_superblock_test 00:11:48.620 ************************************ 00:11:48.620 15:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:11:48.620 15:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:48.621 15:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:48.621 15:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:48.621 15:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:48.621 15:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:48.621 15:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:48.621 15:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:48.621 15:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:48.621 15:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:48.621 15:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:48.621 15:19:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:48.621 15:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:48.621 15:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:48.621 15:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:48.621 15:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:48.621 15:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74346 00:11:48.621 15:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74346 00:11:48.621 15:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74346 ']' 00:11:48.621 15:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.621 15:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:48.621 15:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.621 15:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:48.621 15:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:48.621 15:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.621 [2024-11-20 15:19:35.016894] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:11:48.621 [2024-11-20 15:19:35.017013] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74346 ] 00:11:48.880 [2024-11-20 15:19:35.197943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.880 [2024-11-20 15:19:35.315383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.139 [2024-11-20 15:19:35.529528] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:49.139 [2024-11-20 15:19:35.529577] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:49.706 15:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:49.706 15:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:49.706 15:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:49.706 15:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:49.706 15:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:49.706 15:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:49.706 15:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:49.706 15:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:49.706 15:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:49.706 15:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:49.706 15:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:49.706 
15:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.706 15:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.706 malloc1 00:11:49.706 15:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.706 15:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:49.706 15:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.706 15:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.706 [2024-11-20 15:19:35.964204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:49.706 [2024-11-20 15:19:35.964264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.706 [2024-11-20 15:19:35.964296] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:49.706 [2024-11-20 15:19:35.964310] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.706 [2024-11-20 15:19:35.966616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.706 [2024-11-20 15:19:35.966664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:49.706 pt1 00:11:49.706 15:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.706 15:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:49.706 15:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:49.706 15:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:49.706 15:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:49.706 15:19:35 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:49.706 15:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:49.706 15:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:49.706 15:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:49.706 15:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:49.706 15:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.706 15:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.706 malloc2 00:11:49.706 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.706 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:49.706 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.706 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.706 [2024-11-20 15:19:36.021302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:49.706 [2024-11-20 15:19:36.021357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.706 [2024-11-20 15:19:36.021387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:49.706 [2024-11-20 15:19:36.021398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.706 [2024-11-20 15:19:36.023758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.706 [2024-11-20 15:19:36.023787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:49.706 
pt2 00:11:49.706 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.706 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:49.706 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:49.706 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:49.706 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:49.706 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:49.706 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:49.706 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:49.706 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:49.706 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:49.706 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.706 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.706 malloc3 00:11:49.706 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.706 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:49.706 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.706 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.706 [2024-11-20 15:19:36.086600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:49.706 [2024-11-20 15:19:36.086651] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.706 [2024-11-20 15:19:36.086685] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:49.706 [2024-11-20 15:19:36.086697] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.706 [2024-11-20 15:19:36.089009] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.706 [2024-11-20 15:19:36.089043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:49.706 pt3 00:11:49.706 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.706 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:49.706 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:49.706 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:49.706 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:49.706 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:49.706 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:49.706 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:49.706 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:49.706 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:49.706 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.706 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.706 malloc4 00:11:49.707 15:19:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.707 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:49.707 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.707 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.707 [2024-11-20 15:19:36.142278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:49.707 [2024-11-20 15:19:36.142333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.707 [2024-11-20 15:19:36.142356] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:49.707 [2024-11-20 15:19:36.142367] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.707 [2024-11-20 15:19:36.144690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.707 [2024-11-20 15:19:36.144723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:49.707 pt4 00:11:49.707 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.707 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:49.707 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:49.707 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:49.707 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.707 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.707 [2024-11-20 15:19:36.154289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:49.707 [2024-11-20 15:19:36.156336] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:49.707 [2024-11-20 15:19:36.156406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:49.707 [2024-11-20 15:19:36.156468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:49.707 [2024-11-20 15:19:36.156641] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:49.707 [2024-11-20 15:19:36.156671] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:49.707 [2024-11-20 15:19:36.156925] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:49.707 [2024-11-20 15:19:36.157104] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:49.707 [2024-11-20 15:19:36.157123] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:49.707 [2024-11-20 15:19:36.157263] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:49.707 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.707 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:49.707 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:49.707 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:49.707 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.707 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.707 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.707 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.707 
15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.707 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.707 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.707 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.707 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.707 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.707 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.965 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.965 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.965 "name": "raid_bdev1", 00:11:49.965 "uuid": "76f76497-b3b0-4c04-9a42-8d6addc7861c", 00:11:49.965 "strip_size_kb": 0, 00:11:49.965 "state": "online", 00:11:49.965 "raid_level": "raid1", 00:11:49.965 "superblock": true, 00:11:49.965 "num_base_bdevs": 4, 00:11:49.965 "num_base_bdevs_discovered": 4, 00:11:49.965 "num_base_bdevs_operational": 4, 00:11:49.965 "base_bdevs_list": [ 00:11:49.965 { 00:11:49.965 "name": "pt1", 00:11:49.965 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:49.965 "is_configured": true, 00:11:49.965 "data_offset": 2048, 00:11:49.965 "data_size": 63488 00:11:49.965 }, 00:11:49.965 { 00:11:49.965 "name": "pt2", 00:11:49.965 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:49.965 "is_configured": true, 00:11:49.965 "data_offset": 2048, 00:11:49.965 "data_size": 63488 00:11:49.965 }, 00:11:49.965 { 00:11:49.965 "name": "pt3", 00:11:49.965 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:49.965 "is_configured": true, 00:11:49.965 "data_offset": 2048, 00:11:49.965 "data_size": 63488 
00:11:49.965 }, 00:11:49.965 { 00:11:49.965 "name": "pt4", 00:11:49.965 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:49.965 "is_configured": true, 00:11:49.965 "data_offset": 2048, 00:11:49.966 "data_size": 63488 00:11:49.966 } 00:11:49.966 ] 00:11:49.966 }' 00:11:49.966 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.966 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.225 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:50.225 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:50.225 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:50.225 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:50.225 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:50.225 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:50.225 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:50.225 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.225 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.225 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:50.225 [2024-11-20 15:19:36.594001] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:50.225 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.225 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:50.225 "name": "raid_bdev1", 00:11:50.225 "aliases": [ 00:11:50.225 "76f76497-b3b0-4c04-9a42-8d6addc7861c" 00:11:50.225 ], 
00:11:50.225 "product_name": "Raid Volume", 00:11:50.225 "block_size": 512, 00:11:50.225 "num_blocks": 63488, 00:11:50.225 "uuid": "76f76497-b3b0-4c04-9a42-8d6addc7861c", 00:11:50.225 "assigned_rate_limits": { 00:11:50.225 "rw_ios_per_sec": 0, 00:11:50.225 "rw_mbytes_per_sec": 0, 00:11:50.225 "r_mbytes_per_sec": 0, 00:11:50.225 "w_mbytes_per_sec": 0 00:11:50.225 }, 00:11:50.225 "claimed": false, 00:11:50.225 "zoned": false, 00:11:50.225 "supported_io_types": { 00:11:50.225 "read": true, 00:11:50.225 "write": true, 00:11:50.225 "unmap": false, 00:11:50.225 "flush": false, 00:11:50.225 "reset": true, 00:11:50.225 "nvme_admin": false, 00:11:50.225 "nvme_io": false, 00:11:50.225 "nvme_io_md": false, 00:11:50.225 "write_zeroes": true, 00:11:50.225 "zcopy": false, 00:11:50.225 "get_zone_info": false, 00:11:50.225 "zone_management": false, 00:11:50.225 "zone_append": false, 00:11:50.225 "compare": false, 00:11:50.225 "compare_and_write": false, 00:11:50.225 "abort": false, 00:11:50.225 "seek_hole": false, 00:11:50.225 "seek_data": false, 00:11:50.225 "copy": false, 00:11:50.225 "nvme_iov_md": false 00:11:50.225 }, 00:11:50.225 "memory_domains": [ 00:11:50.225 { 00:11:50.225 "dma_device_id": "system", 00:11:50.225 "dma_device_type": 1 00:11:50.225 }, 00:11:50.225 { 00:11:50.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.225 "dma_device_type": 2 00:11:50.225 }, 00:11:50.225 { 00:11:50.225 "dma_device_id": "system", 00:11:50.225 "dma_device_type": 1 00:11:50.225 }, 00:11:50.225 { 00:11:50.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.225 "dma_device_type": 2 00:11:50.225 }, 00:11:50.225 { 00:11:50.225 "dma_device_id": "system", 00:11:50.225 "dma_device_type": 1 00:11:50.225 }, 00:11:50.225 { 00:11:50.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.225 "dma_device_type": 2 00:11:50.225 }, 00:11:50.225 { 00:11:50.225 "dma_device_id": "system", 00:11:50.225 "dma_device_type": 1 00:11:50.225 }, 00:11:50.225 { 00:11:50.225 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:50.225 "dma_device_type": 2 00:11:50.225 } 00:11:50.225 ], 00:11:50.225 "driver_specific": { 00:11:50.225 "raid": { 00:11:50.225 "uuid": "76f76497-b3b0-4c04-9a42-8d6addc7861c", 00:11:50.225 "strip_size_kb": 0, 00:11:50.225 "state": "online", 00:11:50.225 "raid_level": "raid1", 00:11:50.225 "superblock": true, 00:11:50.225 "num_base_bdevs": 4, 00:11:50.225 "num_base_bdevs_discovered": 4, 00:11:50.225 "num_base_bdevs_operational": 4, 00:11:50.225 "base_bdevs_list": [ 00:11:50.225 { 00:11:50.225 "name": "pt1", 00:11:50.225 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:50.225 "is_configured": true, 00:11:50.225 "data_offset": 2048, 00:11:50.225 "data_size": 63488 00:11:50.225 }, 00:11:50.225 { 00:11:50.225 "name": "pt2", 00:11:50.225 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:50.225 "is_configured": true, 00:11:50.225 "data_offset": 2048, 00:11:50.225 "data_size": 63488 00:11:50.225 }, 00:11:50.225 { 00:11:50.225 "name": "pt3", 00:11:50.225 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:50.225 "is_configured": true, 00:11:50.225 "data_offset": 2048, 00:11:50.225 "data_size": 63488 00:11:50.225 }, 00:11:50.225 { 00:11:50.225 "name": "pt4", 00:11:50.225 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:50.225 "is_configured": true, 00:11:50.225 "data_offset": 2048, 00:11:50.225 "data_size": 63488 00:11:50.225 } 00:11:50.225 ] 00:11:50.225 } 00:11:50.225 } 00:11:50.225 }' 00:11:50.225 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:50.225 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:50.225 pt2 00:11:50.225 pt3 00:11:50.225 pt4' 00:11:50.225 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.225 15:19:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:50.225 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.485 15:19:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.485 [2024-11-20 15:19:36.885506] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=76f76497-b3b0-4c04-9a42-8d6addc7861c 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 76f76497-b3b0-4c04-9a42-8d6addc7861c ']' 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.485 [2024-11-20 15:19:36.929165] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:50.485 [2024-11-20 15:19:36.929195] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:50.485 [2024-11-20 15:19:36.929266] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:50.485 [2024-11-20 15:19:36.929345] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:50.485 [2024-11-20 15:19:36.929362] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.485 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.745 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:50.745 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:50.745 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:50.745 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:50.745 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.745 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.745 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.745 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:50.745 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:50.745 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.745 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.745 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.745 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:50.745 15:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:50.745 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.745 15:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:50.745 15:19:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.745 [2024-11-20 15:19:37.080959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:50.745 [2024-11-20 15:19:37.083034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:50.745 [2024-11-20 15:19:37.083087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:50.745 [2024-11-20 15:19:37.083124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:50.745 [2024-11-20 15:19:37.083172] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:50.745 [2024-11-20 15:19:37.083218] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:50.745 [2024-11-20 15:19:37.083239] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:50.745 [2024-11-20 15:19:37.083260] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:50.745 [2024-11-20 15:19:37.083276] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:50.745 [2024-11-20 15:19:37.083288] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:11:50.745 request: 00:11:50.745 { 00:11:50.745 "name": "raid_bdev1", 00:11:50.745 "raid_level": "raid1", 00:11:50.745 "base_bdevs": [ 00:11:50.745 "malloc1", 00:11:50.745 "malloc2", 00:11:50.745 "malloc3", 00:11:50.745 "malloc4" 00:11:50.745 ], 00:11:50.745 "superblock": false, 00:11:50.745 "method": "bdev_raid_create", 00:11:50.745 "req_id": 1 00:11:50.745 } 00:11:50.745 Got JSON-RPC error response 00:11:50.745 response: 00:11:50.745 { 00:11:50.745 "code": -17, 00:11:50.745 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:50.745 } 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:50.745 
15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.745 [2024-11-20 15:19:37.140862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:50.745 [2024-11-20 15:19:37.140914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.745 [2024-11-20 15:19:37.140931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:50.745 [2024-11-20 15:19:37.140945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.745 [2024-11-20 15:19:37.143294] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.745 [2024-11-20 15:19:37.143338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:50.745 [2024-11-20 15:19:37.143416] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:50.745 [2024-11-20 15:19:37.143499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:50.745 pt1 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.745 15:19:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.745 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.746 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.746 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.746 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.746 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.746 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.746 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.746 "name": "raid_bdev1", 00:11:50.746 "uuid": "76f76497-b3b0-4c04-9a42-8d6addc7861c", 00:11:50.746 "strip_size_kb": 0, 00:11:50.746 "state": "configuring", 00:11:50.746 "raid_level": "raid1", 00:11:50.746 "superblock": true, 00:11:50.746 "num_base_bdevs": 4, 00:11:50.746 "num_base_bdevs_discovered": 1, 00:11:50.746 "num_base_bdevs_operational": 4, 00:11:50.746 "base_bdevs_list": [ 00:11:50.746 { 00:11:50.746 "name": "pt1", 00:11:50.746 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:50.746 "is_configured": true, 00:11:50.746 "data_offset": 2048, 00:11:50.746 "data_size": 63488 00:11:50.746 }, 00:11:50.746 { 00:11:50.746 "name": null, 00:11:50.746 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:50.746 "is_configured": false, 00:11:50.746 "data_offset": 2048, 00:11:50.746 "data_size": 63488 00:11:50.746 }, 00:11:50.746 { 00:11:50.746 "name": null, 00:11:50.746 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:50.746 
"is_configured": false, 00:11:50.746 "data_offset": 2048, 00:11:50.746 "data_size": 63488 00:11:50.746 }, 00:11:50.746 { 00:11:50.746 "name": null, 00:11:50.746 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:50.746 "is_configured": false, 00:11:50.746 "data_offset": 2048, 00:11:50.746 "data_size": 63488 00:11:50.746 } 00:11:50.746 ] 00:11:50.746 }' 00:11:50.746 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.746 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.314 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:51.315 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:51.315 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.315 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.315 [2024-11-20 15:19:37.552589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:51.315 [2024-11-20 15:19:37.552674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.315 [2024-11-20 15:19:37.552697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:51.315 [2024-11-20 15:19:37.552711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.315 [2024-11-20 15:19:37.553148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.315 [2024-11-20 15:19:37.553170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:51.315 [2024-11-20 15:19:37.553252] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:51.315 [2024-11-20 15:19:37.553279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:11:51.315 pt2 00:11:51.315 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.315 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:51.315 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.315 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.315 [2024-11-20 15:19:37.560592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:51.315 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.315 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:51.315 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.315 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.315 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.315 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.315 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.315 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.315 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.315 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.315 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.315 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.315 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:51.315 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.315 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.315 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.315 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.315 "name": "raid_bdev1", 00:11:51.315 "uuid": "76f76497-b3b0-4c04-9a42-8d6addc7861c", 00:11:51.315 "strip_size_kb": 0, 00:11:51.315 "state": "configuring", 00:11:51.315 "raid_level": "raid1", 00:11:51.315 "superblock": true, 00:11:51.315 "num_base_bdevs": 4, 00:11:51.315 "num_base_bdevs_discovered": 1, 00:11:51.315 "num_base_bdevs_operational": 4, 00:11:51.315 "base_bdevs_list": [ 00:11:51.315 { 00:11:51.315 "name": "pt1", 00:11:51.315 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:51.315 "is_configured": true, 00:11:51.315 "data_offset": 2048, 00:11:51.315 "data_size": 63488 00:11:51.315 }, 00:11:51.315 { 00:11:51.315 "name": null, 00:11:51.315 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:51.315 "is_configured": false, 00:11:51.315 "data_offset": 0, 00:11:51.315 "data_size": 63488 00:11:51.315 }, 00:11:51.315 { 00:11:51.315 "name": null, 00:11:51.315 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:51.315 "is_configured": false, 00:11:51.315 "data_offset": 2048, 00:11:51.315 "data_size": 63488 00:11:51.315 }, 00:11:51.315 { 00:11:51.315 "name": null, 00:11:51.315 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:51.315 "is_configured": false, 00:11:51.315 "data_offset": 2048, 00:11:51.315 "data_size": 63488 00:11:51.315 } 00:11:51.315 ] 00:11:51.315 }' 00:11:51.315 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.315 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.574 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:11:51.574 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:51.574 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:51.574 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.574 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.574 [2024-11-20 15:19:37.975970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:51.574 [2024-11-20 15:19:37.976035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.574 [2024-11-20 15:19:37.976058] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:51.574 [2024-11-20 15:19:37.976070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.574 [2024-11-20 15:19:37.976506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.574 [2024-11-20 15:19:37.976525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:51.574 [2024-11-20 15:19:37.976607] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:51.574 [2024-11-20 15:19:37.976629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:51.574 pt2 00:11:51.574 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.574 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:51.574 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:51.574 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:51.574 15:19:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.574 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.574 [2024-11-20 15:19:37.983952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:51.574 [2024-11-20 15:19:37.984006] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.574 [2024-11-20 15:19:37.984027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:51.575 [2024-11-20 15:19:37.984038] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.575 [2024-11-20 15:19:37.984411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.575 [2024-11-20 15:19:37.984429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:51.575 [2024-11-20 15:19:37.984494] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:51.575 [2024-11-20 15:19:37.984512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:51.575 pt3 00:11:51.575 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.575 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:51.575 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:51.575 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:51.575 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.575 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.575 [2024-11-20 15:19:37.991908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:51.575 [2024-11-20 
15:19:37.991956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.575 [2024-11-20 15:19:37.991974] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:51.575 [2024-11-20 15:19:37.991984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.575 [2024-11-20 15:19:37.992353] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.575 [2024-11-20 15:19:37.992370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:51.575 [2024-11-20 15:19:37.992434] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:51.575 [2024-11-20 15:19:37.992459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:51.575 [2024-11-20 15:19:37.992582] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:51.575 [2024-11-20 15:19:37.992592] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:51.575 [2024-11-20 15:19:37.992844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:51.575 [2024-11-20 15:19:37.992979] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:51.575 [2024-11-20 15:19:37.992993] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:51.575 [2024-11-20 15:19:37.993115] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.575 pt4 00:11:51.575 15:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.575 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:51.575 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:51.575 15:19:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:51.575 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.575 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.575 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.575 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.575 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.575 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.575 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.575 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.575 15:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.575 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.575 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.575 15:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.575 15:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.575 15:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.575 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.575 "name": "raid_bdev1", 00:11:51.575 "uuid": "76f76497-b3b0-4c04-9a42-8d6addc7861c", 00:11:51.575 "strip_size_kb": 0, 00:11:51.575 "state": "online", 00:11:51.575 "raid_level": "raid1", 00:11:51.575 "superblock": true, 00:11:51.575 "num_base_bdevs": 4, 00:11:51.575 
"num_base_bdevs_discovered": 4, 00:11:51.575 "num_base_bdevs_operational": 4, 00:11:51.575 "base_bdevs_list": [ 00:11:51.575 { 00:11:51.575 "name": "pt1", 00:11:51.575 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:51.575 "is_configured": true, 00:11:51.575 "data_offset": 2048, 00:11:51.575 "data_size": 63488 00:11:51.575 }, 00:11:51.575 { 00:11:51.575 "name": "pt2", 00:11:51.575 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:51.575 "is_configured": true, 00:11:51.575 "data_offset": 2048, 00:11:51.575 "data_size": 63488 00:11:51.575 }, 00:11:51.575 { 00:11:51.575 "name": "pt3", 00:11:51.575 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:51.575 "is_configured": true, 00:11:51.575 "data_offset": 2048, 00:11:51.575 "data_size": 63488 00:11:51.575 }, 00:11:51.575 { 00:11:51.575 "name": "pt4", 00:11:51.575 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:51.575 "is_configured": true, 00:11:51.575 "data_offset": 2048, 00:11:51.575 "data_size": 63488 00:11:51.575 } 00:11:51.575 ] 00:11:51.575 }' 00:11:51.575 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.575 15:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.143 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:52.143 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:52.143 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:52.143 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:52.143 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:52.143 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:52.143 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:11:52.143 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:52.143 15:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.143 15:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.143 [2024-11-20 15:19:38.408127] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:52.143 15:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.143 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:52.143 "name": "raid_bdev1", 00:11:52.143 "aliases": [ 00:11:52.143 "76f76497-b3b0-4c04-9a42-8d6addc7861c" 00:11:52.143 ], 00:11:52.143 "product_name": "Raid Volume", 00:11:52.143 "block_size": 512, 00:11:52.143 "num_blocks": 63488, 00:11:52.143 "uuid": "76f76497-b3b0-4c04-9a42-8d6addc7861c", 00:11:52.143 "assigned_rate_limits": { 00:11:52.143 "rw_ios_per_sec": 0, 00:11:52.143 "rw_mbytes_per_sec": 0, 00:11:52.143 "r_mbytes_per_sec": 0, 00:11:52.143 "w_mbytes_per_sec": 0 00:11:52.143 }, 00:11:52.143 "claimed": false, 00:11:52.143 "zoned": false, 00:11:52.143 "supported_io_types": { 00:11:52.143 "read": true, 00:11:52.143 "write": true, 00:11:52.143 "unmap": false, 00:11:52.143 "flush": false, 00:11:52.143 "reset": true, 00:11:52.143 "nvme_admin": false, 00:11:52.143 "nvme_io": false, 00:11:52.143 "nvme_io_md": false, 00:11:52.143 "write_zeroes": true, 00:11:52.143 "zcopy": false, 00:11:52.143 "get_zone_info": false, 00:11:52.143 "zone_management": false, 00:11:52.143 "zone_append": false, 00:11:52.143 "compare": false, 00:11:52.143 "compare_and_write": false, 00:11:52.143 "abort": false, 00:11:52.143 "seek_hole": false, 00:11:52.143 "seek_data": false, 00:11:52.143 "copy": false, 00:11:52.143 "nvme_iov_md": false 00:11:52.143 }, 00:11:52.143 "memory_domains": [ 00:11:52.143 { 00:11:52.143 "dma_device_id": "system", 00:11:52.143 
"dma_device_type": 1 00:11:52.143 }, 00:11:52.143 { 00:11:52.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.143 "dma_device_type": 2 00:11:52.143 }, 00:11:52.143 { 00:11:52.143 "dma_device_id": "system", 00:11:52.143 "dma_device_type": 1 00:11:52.143 }, 00:11:52.143 { 00:11:52.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.143 "dma_device_type": 2 00:11:52.143 }, 00:11:52.143 { 00:11:52.143 "dma_device_id": "system", 00:11:52.143 "dma_device_type": 1 00:11:52.143 }, 00:11:52.143 { 00:11:52.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.143 "dma_device_type": 2 00:11:52.143 }, 00:11:52.143 { 00:11:52.143 "dma_device_id": "system", 00:11:52.143 "dma_device_type": 1 00:11:52.143 }, 00:11:52.143 { 00:11:52.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.143 "dma_device_type": 2 00:11:52.143 } 00:11:52.143 ], 00:11:52.143 "driver_specific": { 00:11:52.143 "raid": { 00:11:52.143 "uuid": "76f76497-b3b0-4c04-9a42-8d6addc7861c", 00:11:52.143 "strip_size_kb": 0, 00:11:52.143 "state": "online", 00:11:52.143 "raid_level": "raid1", 00:11:52.143 "superblock": true, 00:11:52.143 "num_base_bdevs": 4, 00:11:52.143 "num_base_bdevs_discovered": 4, 00:11:52.143 "num_base_bdevs_operational": 4, 00:11:52.143 "base_bdevs_list": [ 00:11:52.143 { 00:11:52.143 "name": "pt1", 00:11:52.143 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:52.143 "is_configured": true, 00:11:52.143 "data_offset": 2048, 00:11:52.143 "data_size": 63488 00:11:52.143 }, 00:11:52.143 { 00:11:52.143 "name": "pt2", 00:11:52.143 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:52.143 "is_configured": true, 00:11:52.143 "data_offset": 2048, 00:11:52.143 "data_size": 63488 00:11:52.143 }, 00:11:52.143 { 00:11:52.143 "name": "pt3", 00:11:52.143 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:52.143 "is_configured": true, 00:11:52.143 "data_offset": 2048, 00:11:52.143 "data_size": 63488 00:11:52.143 }, 00:11:52.143 { 00:11:52.143 "name": "pt4", 00:11:52.143 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:11:52.143 "is_configured": true, 00:11:52.143 "data_offset": 2048, 00:11:52.143 "data_size": 63488 00:11:52.143 } 00:11:52.143 ] 00:11:52.143 } 00:11:52.143 } 00:11:52.143 }' 00:11:52.143 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:52.143 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:52.143 pt2 00:11:52.143 pt3 00:11:52.143 pt4' 00:11:52.143 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.143 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:52.143 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:52.143 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:52.143 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.143 15:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.143 15:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.144 15:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.144 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:52.144 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:52.144 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:52.144 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:52.144 15:19:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.144 15:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.144 15:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.144 15:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.144 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:52.144 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:52.144 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:52.144 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:52.144 15:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.144 15:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.144 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.144 15:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.404 [2024-11-20 15:19:38.691669] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 76f76497-b3b0-4c04-9a42-8d6addc7861c '!=' 76f76497-b3b0-4c04-9a42-8d6addc7861c ']' 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.404 [2024-11-20 15:19:38.739357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:52.404 15:19:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.404 "name": "raid_bdev1", 00:11:52.404 "uuid": "76f76497-b3b0-4c04-9a42-8d6addc7861c", 00:11:52.404 "strip_size_kb": 0, 00:11:52.404 "state": "online", 
00:11:52.404 "raid_level": "raid1", 00:11:52.404 "superblock": true, 00:11:52.404 "num_base_bdevs": 4, 00:11:52.404 "num_base_bdevs_discovered": 3, 00:11:52.404 "num_base_bdevs_operational": 3, 00:11:52.404 "base_bdevs_list": [ 00:11:52.404 { 00:11:52.404 "name": null, 00:11:52.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.404 "is_configured": false, 00:11:52.404 "data_offset": 0, 00:11:52.404 "data_size": 63488 00:11:52.404 }, 00:11:52.404 { 00:11:52.404 "name": "pt2", 00:11:52.404 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:52.404 "is_configured": true, 00:11:52.404 "data_offset": 2048, 00:11:52.404 "data_size": 63488 00:11:52.404 }, 00:11:52.404 { 00:11:52.404 "name": "pt3", 00:11:52.404 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:52.404 "is_configured": true, 00:11:52.404 "data_offset": 2048, 00:11:52.404 "data_size": 63488 00:11:52.404 }, 00:11:52.404 { 00:11:52.404 "name": "pt4", 00:11:52.404 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:52.404 "is_configured": true, 00:11:52.404 "data_offset": 2048, 00:11:52.404 "data_size": 63488 00:11:52.404 } 00:11:52.404 ] 00:11:52.404 }' 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.404 15:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.972 [2024-11-20 15:19:39.158921] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:52.972 [2024-11-20 15:19:39.158955] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:52.972 [2024-11-20 15:19:39.159029] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:11:52.972 [2024-11-20 15:19:39.159105] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:52.972 [2024-11-20 15:19:39.159116] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:52.972 
15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.972 [2024-11-20 15:19:39.246877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:52.972 [2024-11-20 15:19:39.246938] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.972 [2024-11-20 15:19:39.246958] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:52.972 [2024-11-20 15:19:39.246970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.972 [2024-11-20 15:19:39.249401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.972 [2024-11-20 15:19:39.249441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:52.972 [2024-11-20 15:19:39.249521] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:52.972 [2024-11-20 15:19:39.249562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:52.972 pt2 00:11:52.972 15:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.973 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:52.973 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.973 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.973 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.973 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.973 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:52.973 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.973 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.973 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.973 15:19:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.973 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.973 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.973 15:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.973 15:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.973 15:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.973 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.973 "name": "raid_bdev1", 00:11:52.973 "uuid": "76f76497-b3b0-4c04-9a42-8d6addc7861c", 00:11:52.973 "strip_size_kb": 0, 00:11:52.973 "state": "configuring", 00:11:52.973 "raid_level": "raid1", 00:11:52.973 "superblock": true, 00:11:52.973 "num_base_bdevs": 4, 00:11:52.973 "num_base_bdevs_discovered": 1, 00:11:52.973 "num_base_bdevs_operational": 3, 00:11:52.973 "base_bdevs_list": [ 00:11:52.973 { 00:11:52.973 "name": null, 00:11:52.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.973 "is_configured": false, 00:11:52.973 "data_offset": 2048, 00:11:52.973 "data_size": 63488 00:11:52.973 }, 00:11:52.973 { 00:11:52.973 "name": "pt2", 00:11:52.973 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:52.973 "is_configured": true, 00:11:52.973 "data_offset": 2048, 00:11:52.973 "data_size": 63488 00:11:52.973 }, 00:11:52.973 { 00:11:52.973 "name": null, 00:11:52.973 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:52.973 "is_configured": false, 00:11:52.973 "data_offset": 2048, 00:11:52.973 "data_size": 63488 00:11:52.973 }, 00:11:52.973 { 00:11:52.973 "name": null, 00:11:52.973 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:52.973 "is_configured": false, 00:11:52.973 "data_offset": 2048, 00:11:52.973 "data_size": 63488 00:11:52.973 } 00:11:52.973 ] 00:11:52.973 }' 
00:11:52.973 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.973 15:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.232 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:53.232 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:53.232 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:53.232 15:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.232 15:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.232 [2024-11-20 15:19:39.670916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:53.232 [2024-11-20 15:19:39.670988] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.232 [2024-11-20 15:19:39.671011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:11:53.232 [2024-11-20 15:19:39.671022] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.232 [2024-11-20 15:19:39.671465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.232 [2024-11-20 15:19:39.671500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:53.232 [2024-11-20 15:19:39.671591] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:53.232 [2024-11-20 15:19:39.671615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:53.232 pt3 00:11:53.232 15:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.232 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:11:53.232 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.232 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.232 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.232 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.232 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:53.232 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.232 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.232 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.232 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.232 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.232 15:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.232 15:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.232 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.232 15:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.491 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.491 "name": "raid_bdev1", 00:11:53.491 "uuid": "76f76497-b3b0-4c04-9a42-8d6addc7861c", 00:11:53.491 "strip_size_kb": 0, 00:11:53.491 "state": "configuring", 00:11:53.491 "raid_level": "raid1", 00:11:53.491 "superblock": true, 00:11:53.491 "num_base_bdevs": 4, 00:11:53.491 "num_base_bdevs_discovered": 2, 00:11:53.491 "num_base_bdevs_operational": 3, 00:11:53.491 
"base_bdevs_list": [ 00:11:53.491 { 00:11:53.491 "name": null, 00:11:53.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.491 "is_configured": false, 00:11:53.491 "data_offset": 2048, 00:11:53.491 "data_size": 63488 00:11:53.491 }, 00:11:53.491 { 00:11:53.491 "name": "pt2", 00:11:53.491 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:53.491 "is_configured": true, 00:11:53.491 "data_offset": 2048, 00:11:53.491 "data_size": 63488 00:11:53.491 }, 00:11:53.491 { 00:11:53.491 "name": "pt3", 00:11:53.491 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:53.491 "is_configured": true, 00:11:53.491 "data_offset": 2048, 00:11:53.491 "data_size": 63488 00:11:53.491 }, 00:11:53.491 { 00:11:53.491 "name": null, 00:11:53.491 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:53.491 "is_configured": false, 00:11:53.491 "data_offset": 2048, 00:11:53.491 "data_size": 63488 00:11:53.491 } 00:11:53.491 ] 00:11:53.491 }' 00:11:53.491 15:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.491 15:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.750 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:53.750 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:53.750 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:11:53.750 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:53.750 15:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.750 15:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.750 [2024-11-20 15:19:40.094697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:53.750 [2024-11-20 15:19:40.094781] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.750 [2024-11-20 15:19:40.094810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:11:53.750 [2024-11-20 15:19:40.094822] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.750 [2024-11-20 15:19:40.095263] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.750 [2024-11-20 15:19:40.095304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:53.750 [2024-11-20 15:19:40.095393] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:53.750 [2024-11-20 15:19:40.095416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:53.750 [2024-11-20 15:19:40.095547] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:53.750 [2024-11-20 15:19:40.095561] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:53.750 [2024-11-20 15:19:40.095824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:53.750 [2024-11-20 15:19:40.095974] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:53.750 [2024-11-20 15:19:40.095988] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:53.750 [2024-11-20 15:19:40.096129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.750 pt4 00:11:53.750 15:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.750 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:53.750 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.750 15:19:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.750 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.750 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.750 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:53.750 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.750 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.750 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.750 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.750 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.750 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.750 15:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.750 15:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.750 15:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.750 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.750 "name": "raid_bdev1", 00:11:53.750 "uuid": "76f76497-b3b0-4c04-9a42-8d6addc7861c", 00:11:53.750 "strip_size_kb": 0, 00:11:53.750 "state": "online", 00:11:53.750 "raid_level": "raid1", 00:11:53.750 "superblock": true, 00:11:53.750 "num_base_bdevs": 4, 00:11:53.750 "num_base_bdevs_discovered": 3, 00:11:53.750 "num_base_bdevs_operational": 3, 00:11:53.750 "base_bdevs_list": [ 00:11:53.750 { 00:11:53.750 "name": null, 00:11:53.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.750 "is_configured": false, 00:11:53.750 
"data_offset": 2048, 00:11:53.750 "data_size": 63488 00:11:53.750 }, 00:11:53.750 { 00:11:53.750 "name": "pt2", 00:11:53.750 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:53.750 "is_configured": true, 00:11:53.750 "data_offset": 2048, 00:11:53.750 "data_size": 63488 00:11:53.750 }, 00:11:53.750 { 00:11:53.750 "name": "pt3", 00:11:53.750 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:53.750 "is_configured": true, 00:11:53.750 "data_offset": 2048, 00:11:53.750 "data_size": 63488 00:11:53.750 }, 00:11:53.750 { 00:11:53.750 "name": "pt4", 00:11:53.750 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:53.750 "is_configured": true, 00:11:53.750 "data_offset": 2048, 00:11:53.750 "data_size": 63488 00:11:53.750 } 00:11:53.750 ] 00:11:53.750 }' 00:11:53.750 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.750 15:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.318 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:54.318 15:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.318 15:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.318 [2024-11-20 15:19:40.526018] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:54.318 [2024-11-20 15:19:40.526052] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:54.318 [2024-11-20 15:19:40.526130] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:54.318 [2024-11-20 15:19:40.526200] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:54.318 [2024-11-20 15:19:40.526214] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:54.318 15:19:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.318 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.318 15:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.318 15:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.318 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:54.318 15:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.318 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:54.318 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:54.318 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:11:54.318 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:11:54.318 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:11:54.318 15:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.318 15:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.318 15:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.318 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:54.318 15:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.318 15:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.318 [2024-11-20 15:19:40.593910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:54.318 [2024-11-20 15:19:40.593979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:11:54.318 [2024-11-20 15:19:40.593999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:11:54.318 [2024-11-20 15:19:40.594015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.318 [2024-11-20 15:19:40.596435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.318 [2024-11-20 15:19:40.596484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:54.318 [2024-11-20 15:19:40.596565] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:54.318 [2024-11-20 15:19:40.596620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:54.318 [2024-11-20 15:19:40.596771] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:54.318 [2024-11-20 15:19:40.596788] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:54.318 [2024-11-20 15:19:40.596803] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:11:54.318 [2024-11-20 15:19:40.596872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:54.318 [2024-11-20 15:19:40.596969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:54.318 pt1 00:11:54.318 15:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.318 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:11:54.318 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:54.318 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:54.318 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:11:54.318 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.318 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.318 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:54.318 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.319 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.319 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.319 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.319 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.319 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.319 15:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.319 15:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.319 15:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.319 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.319 "name": "raid_bdev1", 00:11:54.319 "uuid": "76f76497-b3b0-4c04-9a42-8d6addc7861c", 00:11:54.319 "strip_size_kb": 0, 00:11:54.319 "state": "configuring", 00:11:54.319 "raid_level": "raid1", 00:11:54.319 "superblock": true, 00:11:54.319 "num_base_bdevs": 4, 00:11:54.319 "num_base_bdevs_discovered": 2, 00:11:54.319 "num_base_bdevs_operational": 3, 00:11:54.319 "base_bdevs_list": [ 00:11:54.319 { 00:11:54.319 "name": null, 00:11:54.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.319 "is_configured": false, 00:11:54.319 "data_offset": 2048, 00:11:54.319 
"data_size": 63488 00:11:54.319 }, 00:11:54.319 { 00:11:54.319 "name": "pt2", 00:11:54.319 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:54.319 "is_configured": true, 00:11:54.319 "data_offset": 2048, 00:11:54.319 "data_size": 63488 00:11:54.319 }, 00:11:54.319 { 00:11:54.319 "name": "pt3", 00:11:54.319 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:54.319 "is_configured": true, 00:11:54.319 "data_offset": 2048, 00:11:54.319 "data_size": 63488 00:11:54.319 }, 00:11:54.319 { 00:11:54.319 "name": null, 00:11:54.319 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:54.319 "is_configured": false, 00:11:54.319 "data_offset": 2048, 00:11:54.319 "data_size": 63488 00:11:54.319 } 00:11:54.319 ] 00:11:54.319 }' 00:11:54.319 15:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.319 15:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.578 15:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:54.578 15:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.578 15:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:54.578 15:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.578 15:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.837 15:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:54.837 15:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:54.837 15:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.837 15:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.837 [2024-11-20 
15:19:41.073797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:54.837 [2024-11-20 15:19:41.073865] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.837 [2024-11-20 15:19:41.073889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:54.837 [2024-11-20 15:19:41.073901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.837 [2024-11-20 15:19:41.074357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.837 [2024-11-20 15:19:41.074386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:54.837 [2024-11-20 15:19:41.074475] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:54.837 [2024-11-20 15:19:41.074500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:54.837 [2024-11-20 15:19:41.074639] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:54.837 [2024-11-20 15:19:41.074675] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:54.837 [2024-11-20 15:19:41.074964] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:11:54.837 [2024-11-20 15:19:41.075111] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:54.837 [2024-11-20 15:19:41.075129] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:54.837 [2024-11-20 15:19:41.075280] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.837 pt4 00:11:54.837 15:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.837 15:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:54.837 15:19:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:54.837 15:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:54.837 15:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.837 15:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.837 15:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:54.837 15:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.837 15:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.837 15:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.837 15:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.837 15:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.837 15:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.837 15:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.837 15:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.837 15:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.837 15:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.837 "name": "raid_bdev1", 00:11:54.837 "uuid": "76f76497-b3b0-4c04-9a42-8d6addc7861c", 00:11:54.837 "strip_size_kb": 0, 00:11:54.837 "state": "online", 00:11:54.837 "raid_level": "raid1", 00:11:54.837 "superblock": true, 00:11:54.837 "num_base_bdevs": 4, 00:11:54.837 "num_base_bdevs_discovered": 3, 00:11:54.837 "num_base_bdevs_operational": 3, 00:11:54.837 "base_bdevs_list": [ 00:11:54.837 { 
00:11:54.837 "name": null, 00:11:54.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.837 "is_configured": false, 00:11:54.837 "data_offset": 2048, 00:11:54.837 "data_size": 63488 00:11:54.837 }, 00:11:54.837 { 00:11:54.837 "name": "pt2", 00:11:54.837 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:54.838 "is_configured": true, 00:11:54.838 "data_offset": 2048, 00:11:54.838 "data_size": 63488 00:11:54.838 }, 00:11:54.838 { 00:11:54.838 "name": "pt3", 00:11:54.838 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:54.838 "is_configured": true, 00:11:54.838 "data_offset": 2048, 00:11:54.838 "data_size": 63488 00:11:54.838 }, 00:11:54.838 { 00:11:54.838 "name": "pt4", 00:11:54.838 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:54.838 "is_configured": true, 00:11:54.838 "data_offset": 2048, 00:11:54.838 "data_size": 63488 00:11:54.838 } 00:11:54.838 ] 00:11:54.838 }' 00:11:54.838 15:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.838 15:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.095 15:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:55.095 15:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:55.095 15:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.095 15:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.095 15:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.095 15:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:55.095 15:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:55.095 15:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:55.095 
15:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.095 15:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.095 [2024-11-20 15:19:41.514064] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:55.095 15:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.095 15:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 76f76497-b3b0-4c04-9a42-8d6addc7861c '!=' 76f76497-b3b0-4c04-9a42-8d6addc7861c ']' 00:11:55.095 15:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74346 00:11:55.095 15:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74346 ']' 00:11:55.095 15:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74346 00:11:55.095 15:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:55.095 15:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:55.095 15:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74346 00:11:55.354 15:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:55.354 15:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:55.354 killing process with pid 74346 00:11:55.354 15:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74346' 00:11:55.354 15:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74346 00:11:55.354 [2024-11-20 15:19:41.597110] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:55.354 [2024-11-20 15:19:41.597206] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:55.354 15:19:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74346 00:11:55.354 [2024-11-20 15:19:41.597279] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:55.354 [2024-11-20 15:19:41.597293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:55.613 [2024-11-20 15:19:42.003245] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:56.990 15:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:56.990 00:11:56.990 real 0m8.213s 00:11:56.990 user 0m12.915s 00:11:56.990 sys 0m1.705s 00:11:56.990 15:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.990 15:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.990 ************************************ 00:11:56.990 END TEST raid_superblock_test 00:11:56.990 ************************************ 00:11:56.990 15:19:43 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:11:56.990 15:19:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:56.990 15:19:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.990 15:19:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:56.990 ************************************ 00:11:56.990 START TEST raid_read_error_test 00:11:56.990 ************************************ 00:11:56.990 15:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:11:56.990 15:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:56.990 15:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:56.990 15:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:56.990 
15:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:56.990 15:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.990 15:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:56.990 15:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:56.990 15:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.990 15:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:56.990 15:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:56.990 15:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.990 15:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:56.990 15:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:56.990 15:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.990 15:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:56.990 15:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:56.990 15:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.991 15:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:56.991 15:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:56.991 15:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:56.991 15:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:56.991 15:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:56.991 15:19:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:56.991 15:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:56.991 15:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:56.991 15:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:56.991 15:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:56.991 15:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LYmXCLPkgW 00:11:56.991 15:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74829 00:11:56.991 15:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74829 00:11:56.991 15:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:56.991 15:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74829 ']' 00:11:56.991 15:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.991 15:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.991 15:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.991 15:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.991 15:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.991 [2024-11-20 15:19:43.325450] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:11:56.991 [2024-11-20 15:19:43.325575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74829 ] 00:11:57.250 [2024-11-20 15:19:43.508090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.250 [2024-11-20 15:19:43.630993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.508 [2024-11-20 15:19:43.860845] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:57.508 [2024-11-20 15:19:43.860887] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:57.767 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:57.767 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:57.767 15:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:57.767 15:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:57.767 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.767 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.767 BaseBdev1_malloc 00:11:57.767 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.767 15:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:57.767 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.767 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.767 true 00:11:57.767 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:57.767 15:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:57.767 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.767 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.767 [2024-11-20 15:19:44.243253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:57.767 [2024-11-20 15:19:44.243313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.767 [2024-11-20 15:19:44.243336] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:57.767 [2024-11-20 15:19:44.243349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.767 [2024-11-20 15:19:44.245673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.767 [2024-11-20 15:19:44.245715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:57.767 BaseBdev1 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.026 BaseBdev2_malloc 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.026 true 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.026 [2024-11-20 15:19:44.308199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:58.026 [2024-11-20 15:19:44.308257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.026 [2024-11-20 15:19:44.308275] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:58.026 [2024-11-20 15:19:44.308288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.026 [2024-11-20 15:19:44.310571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.026 [2024-11-20 15:19:44.310614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:58.026 BaseBdev2 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.026 BaseBdev3_malloc 00:11:58.026 15:19:44 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.026 true 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.026 [2024-11-20 15:19:44.392293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:58.026 [2024-11-20 15:19:44.392347] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.026 [2024-11-20 15:19:44.392367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:58.026 [2024-11-20 15:19:44.392380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.026 [2024-11-20 15:19:44.394732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.026 [2024-11-20 15:19:44.394783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:58.026 BaseBdev3 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.026 BaseBdev4_malloc 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.026 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.027 true 00:11:58.027 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.027 15:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:58.027 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.027 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.027 [2024-11-20 15:19:44.463490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:58.027 [2024-11-20 15:19:44.463690] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.027 [2024-11-20 15:19:44.463720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:58.027 [2024-11-20 15:19:44.463735] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.027 [2024-11-20 15:19:44.466049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.027 [2024-11-20 15:19:44.466094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:58.027 BaseBdev4 00:11:58.027 15:19:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.027 15:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:58.027 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.027 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.027 [2024-11-20 15:19:44.475533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:58.027 [2024-11-20 15:19:44.477708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:58.027 [2024-11-20 15:19:44.477887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:58.027 [2024-11-20 15:19:44.477962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:58.027 [2024-11-20 15:19:44.478210] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:58.027 [2024-11-20 15:19:44.478227] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:58.027 [2024-11-20 15:19:44.478468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:58.027 [2024-11-20 15:19:44.478619] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:58.027 [2024-11-20 15:19:44.478629] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:58.027 [2024-11-20 15:19:44.478792] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.027 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.027 15:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:58.027 15:19:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.027 15:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.027 15:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.027 15:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.027 15:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.027 15:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.027 15:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.027 15:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.027 15:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.027 15:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.027 15:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.027 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.027 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.285 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.285 15:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.285 "name": "raid_bdev1", 00:11:58.285 "uuid": "570d4538-a6b8-4610-a261-4ea43aedf4f9", 00:11:58.285 "strip_size_kb": 0, 00:11:58.285 "state": "online", 00:11:58.285 "raid_level": "raid1", 00:11:58.285 "superblock": true, 00:11:58.285 "num_base_bdevs": 4, 00:11:58.285 "num_base_bdevs_discovered": 4, 00:11:58.285 "num_base_bdevs_operational": 4, 00:11:58.286 "base_bdevs_list": [ 00:11:58.286 { 
00:11:58.286 "name": "BaseBdev1", 00:11:58.286 "uuid": "7d5e1e44-59f9-5f03-8339-a89522c6ae1d", 00:11:58.286 "is_configured": true, 00:11:58.286 "data_offset": 2048, 00:11:58.286 "data_size": 63488 00:11:58.286 }, 00:11:58.286 { 00:11:58.286 "name": "BaseBdev2", 00:11:58.286 "uuid": "481d5a22-cdbe-522d-9af4-b45858e35d23", 00:11:58.286 "is_configured": true, 00:11:58.286 "data_offset": 2048, 00:11:58.286 "data_size": 63488 00:11:58.286 }, 00:11:58.286 { 00:11:58.286 "name": "BaseBdev3", 00:11:58.286 "uuid": "39c8b202-ad4f-5e8d-884b-c8678a10864d", 00:11:58.286 "is_configured": true, 00:11:58.286 "data_offset": 2048, 00:11:58.286 "data_size": 63488 00:11:58.286 }, 00:11:58.286 { 00:11:58.286 "name": "BaseBdev4", 00:11:58.286 "uuid": "25fe50c2-f56e-5474-838a-0766d6d47707", 00:11:58.286 "is_configured": true, 00:11:58.286 "data_offset": 2048, 00:11:58.286 "data_size": 63488 00:11:58.286 } 00:11:58.286 ] 00:11:58.286 }' 00:11:58.286 15:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.286 15:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.544 15:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:58.544 15:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:58.544 [2024-11-20 15:19:44.964252] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:59.479 15:19:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:59.479 15:19:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.479 15:19:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.479 15:19:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.479 15:19:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:59.479 15:19:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:59.479 15:19:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:59.479 15:19:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:59.479 15:19:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:59.479 15:19:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.479 15:19:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.479 15:19:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.479 15:19:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.479 15:19:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.479 15:19:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.479 15:19:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.479 15:19:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.479 15:19:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.479 15:19:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.479 15:19:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.479 15:19:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.479 15:19:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.479 15:19:45 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.479 15:19:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.479 "name": "raid_bdev1", 00:11:59.479 "uuid": "570d4538-a6b8-4610-a261-4ea43aedf4f9", 00:11:59.479 "strip_size_kb": 0, 00:11:59.479 "state": "online", 00:11:59.479 "raid_level": "raid1", 00:11:59.479 "superblock": true, 00:11:59.479 "num_base_bdevs": 4, 00:11:59.479 "num_base_bdevs_discovered": 4, 00:11:59.479 "num_base_bdevs_operational": 4, 00:11:59.479 "base_bdevs_list": [ 00:11:59.479 { 00:11:59.479 "name": "BaseBdev1", 00:11:59.479 "uuid": "7d5e1e44-59f9-5f03-8339-a89522c6ae1d", 00:11:59.479 "is_configured": true, 00:11:59.479 "data_offset": 2048, 00:11:59.479 "data_size": 63488 00:11:59.479 }, 00:11:59.479 { 00:11:59.479 "name": "BaseBdev2", 00:11:59.479 "uuid": "481d5a22-cdbe-522d-9af4-b45858e35d23", 00:11:59.479 "is_configured": true, 00:11:59.479 "data_offset": 2048, 00:11:59.479 "data_size": 63488 00:11:59.479 }, 00:11:59.479 { 00:11:59.480 "name": "BaseBdev3", 00:11:59.480 "uuid": "39c8b202-ad4f-5e8d-884b-c8678a10864d", 00:11:59.480 "is_configured": true, 00:11:59.480 "data_offset": 2048, 00:11:59.480 "data_size": 63488 00:11:59.480 }, 00:11:59.480 { 00:11:59.480 "name": "BaseBdev4", 00:11:59.480 "uuid": "25fe50c2-f56e-5474-838a-0766d6d47707", 00:11:59.480 "is_configured": true, 00:11:59.480 "data_offset": 2048, 00:11:59.480 "data_size": 63488 00:11:59.480 } 00:11:59.480 ] 00:11:59.480 }' 00:11:59.480 15:19:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.480 15:19:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.084 15:19:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:00.084 15:19:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.084 15:19:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:00.084 [2024-11-20 15:19:46.308320] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:00.084 [2024-11-20 15:19:46.308482] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:00.084 [2024-11-20 15:19:46.311262] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:00.084 [2024-11-20 15:19:46.311442] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.084 [2024-11-20 15:19:46.311597] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:00.084 [2024-11-20 15:19:46.311768] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:00.084 15:19:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.084 15:19:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74829 00:12:00.084 { 00:12:00.084 "results": [ 00:12:00.084 { 00:12:00.084 "job": "raid_bdev1", 00:12:00.084 "core_mask": "0x1", 00:12:00.084 "workload": "randrw", 00:12:00.084 "percentage": 50, 00:12:00.084 "status": "finished", 00:12:00.084 "queue_depth": 1, 00:12:00.084 "io_size": 131072, 00:12:00.084 "runtime": 1.34437, 00:12:00.084 "iops": 11171.775627245475, 00:12:00.084 "mibps": 1396.4719534056844, 00:12:00.084 "io_failed": 0, 00:12:00.084 "io_timeout": 0, 00:12:00.084 "avg_latency_us": 86.78753846199098, 00:12:00.084 "min_latency_us": 24.469076305220884, 00:12:00.084 "max_latency_us": 1460.7421686746989 00:12:00.084 } 00:12:00.084 ], 00:12:00.084 "core_count": 1 00:12:00.084 } 00:12:00.084 15:19:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74829 ']' 00:12:00.084 15:19:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74829 00:12:00.084 15:19:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:12:00.084 15:19:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.084 15:19:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74829 00:12:00.084 killing process with pid 74829 00:12:00.084 15:19:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:00.084 15:19:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:00.084 15:19:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74829' 00:12:00.084 15:19:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74829 00:12:00.084 [2024-11-20 15:19:46.344535] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:00.084 15:19:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74829 00:12:00.344 [2024-11-20 15:19:46.674592] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:01.719 15:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LYmXCLPkgW 00:12:01.719 15:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:01.719 15:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:01.719 15:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:01.719 15:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:01.719 ************************************ 00:12:01.719 END TEST raid_read_error_test 00:12:01.719 ************************************ 00:12:01.719 15:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:01.719 15:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:01.719 15:19:47 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:01.719 00:12:01.719 real 0m4.713s 00:12:01.719 user 0m5.472s 00:12:01.719 sys 0m0.635s 00:12:01.719 15:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.719 15:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.719 15:19:47 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:01.719 15:19:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:01.719 15:19:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.719 15:19:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:01.719 ************************************ 00:12:01.719 START TEST raid_write_error_test 00:12:01.719 ************************************ 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.83CuCVift8 00:12:01.719 15:19:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74969 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74969 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 74969 ']' 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:01.719 15:19:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.719 [2024-11-20 15:19:48.113513] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:12:01.719 [2024-11-20 15:19:48.113642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74969 ] 00:12:01.977 [2024-11-20 15:19:48.298696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.977 [2024-11-20 15:19:48.420606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.235 [2024-11-20 15:19:48.634595] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.235 [2024-11-20 15:19:48.634860] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.493 15:19:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:02.493 15:19:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:02.493 15:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:02.493 15:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:02.493 15:19:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.493 15:19:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.752 BaseBdev1_malloc 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.752 true 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.752 [2024-11-20 15:19:49.034740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:02.752 [2024-11-20 15:19:49.034950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:02.752 [2024-11-20 15:19:49.034986] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:02.752 [2024-11-20 15:19:49.035002] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:02.752 [2024-11-20 15:19:49.037638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:02.752 [2024-11-20 15:19:49.037700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:02.752 BaseBdev1 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.752 BaseBdev2_malloc 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:02.752 15:19:49 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.752 true 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.752 [2024-11-20 15:19:49.103859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:02.752 [2024-11-20 15:19:49.103917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:02.752 [2024-11-20 15:19:49.103936] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:02.752 [2024-11-20 15:19:49.103950] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:02.752 [2024-11-20 15:19:49.106244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:02.752 [2024-11-20 15:19:49.106288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:02.752 BaseBdev2 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:02.752 BaseBdev3_malloc 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.752 true 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.752 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.752 [2024-11-20 15:19:49.184937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:02.752 [2024-11-20 15:19:49.185127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:02.753 [2024-11-20 15:19:49.185157] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:02.753 [2024-11-20 15:19:49.185172] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:02.753 [2024-11-20 15:19:49.187848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:02.753 [2024-11-20 15:19:49.187905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:02.753 BaseBdev3 00:12:02.753 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.753 15:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:02.753 15:19:49 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:02.753 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.753 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.011 BaseBdev4_malloc 00:12:03.011 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.011 15:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:03.011 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.011 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.011 true 00:12:03.011 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.011 15:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:03.011 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.011 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.011 [2024-11-20 15:19:49.255252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:03.011 [2024-11-20 15:19:49.255314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.011 [2024-11-20 15:19:49.255336] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:03.011 [2024-11-20 15:19:49.255351] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.011 [2024-11-20 15:19:49.257951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.011 [2024-11-20 15:19:49.258001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:03.011 BaseBdev4 
00:12:03.011 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.011 15:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:03.011 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.011 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.011 [2024-11-20 15:19:49.267301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:03.011 [2024-11-20 15:19:49.269950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:03.011 [2024-11-20 15:19:49.270170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:03.011 [2024-11-20 15:19:49.270393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:03.011 [2024-11-20 15:19:49.270768] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:03.011 [2024-11-20 15:19:49.270914] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:03.011 [2024-11-20 15:19:49.271267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:03.011 [2024-11-20 15:19:49.271613] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:03.011 [2024-11-20 15:19:49.271741] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:03.011 [2024-11-20 15:19:49.272096] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.011 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.011 15:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:03.011 15:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.011 15:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.011 15:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.011 15:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.011 15:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.011 15:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.011 15:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.011 15:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.011 15:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.011 15:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.011 15:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.011 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.011 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.011 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.011 15:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.011 "name": "raid_bdev1", 00:12:03.011 "uuid": "013a63a1-233b-4427-921f-2e87905975ea", 00:12:03.011 "strip_size_kb": 0, 00:12:03.011 "state": "online", 00:12:03.011 "raid_level": "raid1", 00:12:03.011 "superblock": true, 00:12:03.011 "num_base_bdevs": 4, 00:12:03.011 "num_base_bdevs_discovered": 4, 00:12:03.011 
"num_base_bdevs_operational": 4, 00:12:03.011 "base_bdevs_list": [ 00:12:03.011 { 00:12:03.011 "name": "BaseBdev1", 00:12:03.011 "uuid": "7252d7ae-f7df-52de-90d7-a62aa13e1205", 00:12:03.011 "is_configured": true, 00:12:03.011 "data_offset": 2048, 00:12:03.011 "data_size": 63488 00:12:03.011 }, 00:12:03.011 { 00:12:03.011 "name": "BaseBdev2", 00:12:03.011 "uuid": "bd48d371-78d5-5818-9c66-15584b53950d", 00:12:03.011 "is_configured": true, 00:12:03.011 "data_offset": 2048, 00:12:03.011 "data_size": 63488 00:12:03.011 }, 00:12:03.011 { 00:12:03.011 "name": "BaseBdev3", 00:12:03.011 "uuid": "d243bc05-7ee9-5bdb-86b3-5d790b7c0012", 00:12:03.011 "is_configured": true, 00:12:03.011 "data_offset": 2048, 00:12:03.011 "data_size": 63488 00:12:03.011 }, 00:12:03.011 { 00:12:03.011 "name": "BaseBdev4", 00:12:03.011 "uuid": "3602e598-429d-59c2-b597-8ac653f0139d", 00:12:03.011 "is_configured": true, 00:12:03.011 "data_offset": 2048, 00:12:03.011 "data_size": 63488 00:12:03.011 } 00:12:03.011 ] 00:12:03.011 }' 00:12:03.011 15:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.011 15:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.270 15:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:03.270 15:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:03.528 [2024-11-20 15:19:49.760798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:04.465 15:19:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:04.465 15:19:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.465 15:19:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.465 [2024-11-20 15:19:50.691737] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:04.465 [2024-11-20 15:19:50.691803] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:04.465 [2024-11-20 15:19:50.692032] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:12:04.465 15:19:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.465 15:19:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:04.465 15:19:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:04.465 15:19:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:04.465 15:19:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:04.465 15:19:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:04.465 15:19:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.465 15:19:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.465 15:19:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.465 15:19:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.465 15:19:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.465 15:19:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.465 15:19:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.465 15:19:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.465 15:19:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.465 15:19:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.465 15:19:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.465 15:19:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.465 15:19:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.465 15:19:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.465 15:19:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.465 "name": "raid_bdev1", 00:12:04.465 "uuid": "013a63a1-233b-4427-921f-2e87905975ea", 00:12:04.465 "strip_size_kb": 0, 00:12:04.465 "state": "online", 00:12:04.465 "raid_level": "raid1", 00:12:04.465 "superblock": true, 00:12:04.465 "num_base_bdevs": 4, 00:12:04.465 "num_base_bdevs_discovered": 3, 00:12:04.465 "num_base_bdevs_operational": 3, 00:12:04.465 "base_bdevs_list": [ 00:12:04.465 { 00:12:04.465 "name": null, 00:12:04.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.465 "is_configured": false, 00:12:04.465 "data_offset": 0, 00:12:04.465 "data_size": 63488 00:12:04.465 }, 00:12:04.465 { 00:12:04.465 "name": "BaseBdev2", 00:12:04.465 "uuid": "bd48d371-78d5-5818-9c66-15584b53950d", 00:12:04.465 "is_configured": true, 00:12:04.465 "data_offset": 2048, 00:12:04.466 "data_size": 63488 00:12:04.466 }, 00:12:04.466 { 00:12:04.466 "name": "BaseBdev3", 00:12:04.466 "uuid": "d243bc05-7ee9-5bdb-86b3-5d790b7c0012", 00:12:04.466 "is_configured": true, 00:12:04.466 "data_offset": 2048, 00:12:04.466 "data_size": 63488 00:12:04.466 }, 00:12:04.466 { 00:12:04.466 "name": "BaseBdev4", 00:12:04.466 "uuid": "3602e598-429d-59c2-b597-8ac653f0139d", 00:12:04.466 "is_configured": true, 00:12:04.466 "data_offset": 2048, 00:12:04.466 "data_size": 63488 00:12:04.466 } 00:12:04.466 ] 
00:12:04.466 }' 00:12:04.466 15:19:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.466 15:19:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.725 15:19:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:04.725 15:19:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.725 15:19:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.725 [2024-11-20 15:19:51.087073] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:04.725 [2024-11-20 15:19:51.087270] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:04.725 [2024-11-20 15:19:51.090308] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:04.725 [2024-11-20 15:19:51.090472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.725 [2024-11-20 15:19:51.090606] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:04.725 [2024-11-20 15:19:51.090622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:04.725 { 00:12:04.725 "results": [ 00:12:04.725 { 00:12:04.725 "job": "raid_bdev1", 00:12:04.725 "core_mask": "0x1", 00:12:04.725 "workload": "randrw", 00:12:04.725 "percentage": 50, 00:12:04.725 "status": "finished", 00:12:04.725 "queue_depth": 1, 00:12:04.725 "io_size": 131072, 00:12:04.725 "runtime": 1.326628, 00:12:04.725 "iops": 11663.405265078078, 00:12:04.725 "mibps": 1457.9256581347597, 00:12:04.725 "io_failed": 0, 00:12:04.725 "io_timeout": 0, 00:12:04.725 "avg_latency_us": 82.96751854571392, 00:12:04.725 "min_latency_us": 24.777510040160642, 00:12:04.725 "max_latency_us": 1506.8016064257029 00:12:04.725 } 00:12:04.725 ], 00:12:04.725 "core_count": 1 
00:12:04.725 } 00:12:04.725 15:19:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.725 15:19:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74969 00:12:04.725 15:19:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 74969 ']' 00:12:04.725 15:19:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 74969 00:12:04.725 15:19:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:04.725 15:19:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:04.725 15:19:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74969 00:12:04.725 killing process with pid 74969 00:12:04.725 15:19:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:04.725 15:19:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:04.725 15:19:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74969' 00:12:04.725 15:19:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 74969 00:12:04.725 [2024-11-20 15:19:51.142643] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:04.725 15:19:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 74969 00:12:05.292 [2024-11-20 15:19:51.475927] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:06.229 15:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.83CuCVift8 00:12:06.229 15:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:06.229 15:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:06.229 15:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:06.229 15:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:06.229 ************************************ 00:12:06.229 END TEST raid_write_error_test 00:12:06.229 ************************************ 00:12:06.229 15:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:06.229 15:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:06.229 15:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:06.229 00:12:06.229 real 0m4.689s 00:12:06.229 user 0m5.453s 00:12:06.229 sys 0m0.628s 00:12:06.230 15:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.230 15:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.489 15:19:52 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:06.489 15:19:52 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:06.489 15:19:52 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:06.489 15:19:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:06.489 15:19:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.489 15:19:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:06.489 ************************************ 00:12:06.489 START TEST raid_rebuild_test 00:12:06.489 ************************************ 00:12:06.489 15:19:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:12:06.489 15:19:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:06.489 15:19:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:06.489 15:19:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:06.489 
15:19:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:06.489 15:19:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:06.489 15:19:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:06.489 15:19:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:06.489 15:19:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:06.489 15:19:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:06.489 15:19:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:06.489 15:19:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:06.489 15:19:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:06.489 15:19:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:06.489 15:19:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:06.489 15:19:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:06.489 15:19:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:06.489 15:19:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:06.489 15:19:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:06.489 15:19:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:06.489 15:19:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:06.489 15:19:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:06.489 15:19:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:06.489 15:19:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:12:06.489 15:19:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75118 00:12:06.489 15:19:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75118 00:12:06.489 15:19:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:06.489 15:19:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75118 ']' 00:12:06.489 15:19:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.489 15:19:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:06.489 15:19:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.489 15:19:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:06.489 15:19:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.489 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:06.489 Zero copy mechanism will not be used. 00:12:06.489 [2024-11-20 15:19:52.891298] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:12:06.489 [2024-11-20 15:19:52.891493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75118 ] 00:12:06.748 [2024-11-20 15:19:53.091489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.748 [2024-11-20 15:19:53.208219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.009 [2024-11-20 15:19:53.417154] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:07.009 [2024-11-20 15:19:53.417201] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:07.268 15:19:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:07.268 15:19:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:07.268 15:19:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:07.268 15:19:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:07.268 15:19:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.268 15:19:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.526 BaseBdev1_malloc 00:12:07.526 15:19:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.526 15:19:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:07.526 15:19:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.526 15:19:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.526 [2024-11-20 15:19:53.780349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:07.526 
[2024-11-20 15:19:53.780418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.526 [2024-11-20 15:19:53.780442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:07.526 [2024-11-20 15:19:53.780456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.526 [2024-11-20 15:19:53.782850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.526 [2024-11-20 15:19:53.783032] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:07.526 BaseBdev1 00:12:07.526 15:19:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.527 BaseBdev2_malloc 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.527 [2024-11-20 15:19:53.838162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:07.527 [2024-11-20 15:19:53.838341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.527 [2024-11-20 15:19:53.838376] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:12:07.527 [2024-11-20 15:19:53.838391] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.527 [2024-11-20 15:19:53.840808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.527 [2024-11-20 15:19:53.840849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:07.527 BaseBdev2 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.527 spare_malloc 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.527 spare_delay 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.527 [2024-11-20 15:19:53.920646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:07.527 [2024-11-20 15:19:53.920846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:07.527 [2024-11-20 15:19:53.920876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:07.527 [2024-11-20 15:19:53.920891] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.527 [2024-11-20 15:19:53.923343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.527 [2024-11-20 15:19:53.923393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:07.527 spare 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.527 [2024-11-20 15:19:53.932692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:07.527 [2024-11-20 15:19:53.934727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:07.527 [2024-11-20 15:19:53.934849] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:07.527 [2024-11-20 15:19:53.934867] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:07.527 [2024-11-20 15:19:53.935140] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:07.527 [2024-11-20 15:19:53.935323] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:07.527 [2024-11-20 15:19:53.935335] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:07.527 [2024-11-20 15:19:53.935514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.527 "name": "raid_bdev1", 00:12:07.527 "uuid": "b09e3000-fe2a-4a50-8aa9-a41606ad7c95", 00:12:07.527 "strip_size_kb": 0, 00:12:07.527 "state": "online", 00:12:07.527 
"raid_level": "raid1", 00:12:07.527 "superblock": false, 00:12:07.527 "num_base_bdevs": 2, 00:12:07.527 "num_base_bdevs_discovered": 2, 00:12:07.527 "num_base_bdevs_operational": 2, 00:12:07.527 "base_bdevs_list": [ 00:12:07.527 { 00:12:07.527 "name": "BaseBdev1", 00:12:07.527 "uuid": "e6293443-538c-5e18-9466-2d53d24a7523", 00:12:07.527 "is_configured": true, 00:12:07.527 "data_offset": 0, 00:12:07.527 "data_size": 65536 00:12:07.527 }, 00:12:07.527 { 00:12:07.527 "name": "BaseBdev2", 00:12:07.527 "uuid": "40aeaabc-b190-5bde-be61-5d91c514e1ce", 00:12:07.527 "is_configured": true, 00:12:07.527 "data_offset": 0, 00:12:07.527 "data_size": 65536 00:12:07.527 } 00:12:07.527 ] 00:12:07.527 }' 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.527 15:19:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.151 15:19:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:08.151 15:19:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:08.151 15:19:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.151 15:19:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.151 [2024-11-20 15:19:54.380330] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:08.151 15:19:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.151 15:19:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:08.151 15:19:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.151 15:19:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:08.151 15:19:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.151 15:19:54 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.151 15:19:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.151 15:19:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:08.151 15:19:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:08.151 15:19:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:08.151 15:19:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:08.151 15:19:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:08.151 15:19:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:08.151 15:19:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:08.151 15:19:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:08.151 15:19:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:08.151 15:19:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:08.151 15:19:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:08.151 15:19:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:08.151 15:19:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:08.152 15:19:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:08.411 [2024-11-20 15:19:54.659873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:08.411 /dev/nbd0 00:12:08.411 15:19:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:08.411 15:19:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:12:08.411 15:19:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:08.411 15:19:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:08.411 15:19:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:08.411 15:19:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:08.411 15:19:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:08.411 15:19:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:08.411 15:19:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:08.411 15:19:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:08.411 15:19:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:08.411 1+0 records in 00:12:08.411 1+0 records out 00:12:08.411 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328812 s, 12.5 MB/s 00:12:08.411 15:19:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.411 15:19:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:08.411 15:19:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.411 15:19:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:08.411 15:19:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:08.411 15:19:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:08.411 15:19:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:08.411 15:19:54 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:08.411 15:19:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:08.411 15:19:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:12.603 65536+0 records in 00:12:12.603 65536+0 records out 00:12:12.603 33554432 bytes (34 MB, 32 MiB) copied, 3.94117 s, 8.5 MB/s 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:12.603 [2024-11-20 15:19:58.889074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.603 [2024-11-20 15:19:58.903707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.603 15:19:58 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.603 "name": "raid_bdev1", 00:12:12.603 "uuid": "b09e3000-fe2a-4a50-8aa9-a41606ad7c95", 00:12:12.603 "strip_size_kb": 0, 00:12:12.603 "state": "online", 00:12:12.603 "raid_level": "raid1", 00:12:12.603 "superblock": false, 00:12:12.603 "num_base_bdevs": 2, 00:12:12.603 "num_base_bdevs_discovered": 1, 00:12:12.603 "num_base_bdevs_operational": 1, 00:12:12.603 "base_bdevs_list": [ 00:12:12.603 { 00:12:12.603 "name": null, 00:12:12.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.603 "is_configured": false, 00:12:12.603 "data_offset": 0, 00:12:12.603 "data_size": 65536 00:12:12.603 }, 00:12:12.603 { 00:12:12.603 "name": "BaseBdev2", 00:12:12.603 "uuid": "40aeaabc-b190-5bde-be61-5d91c514e1ce", 00:12:12.603 "is_configured": true, 00:12:12.603 "data_offset": 0, 00:12:12.603 "data_size": 65536 00:12:12.603 } 00:12:12.603 ] 00:12:12.603 }' 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.603 15:19:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.860 15:19:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:12.860 15:19:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.860 15:19:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.860 [2024-11-20 15:19:59.267206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:12.860 [2024-11-20 15:19:59.284757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:12:12.860 15:19:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.860 15:19:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:12.860 [2024-11-20 15:19:59.286854] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:14.235 "name": "raid_bdev1", 00:12:14.235 "uuid": "b09e3000-fe2a-4a50-8aa9-a41606ad7c95", 00:12:14.235 "strip_size_kb": 0, 00:12:14.235 "state": "online", 00:12:14.235 "raid_level": "raid1", 00:12:14.235 "superblock": false, 00:12:14.235 "num_base_bdevs": 2, 00:12:14.235 "num_base_bdevs_discovered": 2, 00:12:14.235 "num_base_bdevs_operational": 2, 00:12:14.235 "process": { 00:12:14.235 "type": "rebuild", 00:12:14.235 "target": "spare", 00:12:14.235 "progress": { 00:12:14.235 
"blocks": 20480, 00:12:14.235 "percent": 31 00:12:14.235 } 00:12:14.235 }, 00:12:14.235 "base_bdevs_list": [ 00:12:14.235 { 00:12:14.235 "name": "spare", 00:12:14.235 "uuid": "3cb2f7f0-0364-5b45-8d94-e87dfcb079b6", 00:12:14.235 "is_configured": true, 00:12:14.235 "data_offset": 0, 00:12:14.235 "data_size": 65536 00:12:14.235 }, 00:12:14.235 { 00:12:14.235 "name": "BaseBdev2", 00:12:14.235 "uuid": "40aeaabc-b190-5bde-be61-5d91c514e1ce", 00:12:14.235 "is_configured": true, 00:12:14.235 "data_offset": 0, 00:12:14.235 "data_size": 65536 00:12:14.235 } 00:12:14.235 ] 00:12:14.235 }' 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.235 [2024-11-20 15:20:00.442875] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:14.235 [2024-11-20 15:20:00.492131] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:14.235 [2024-11-20 15:20:00.492202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.235 [2024-11-20 15:20:00.492219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:14.235 [2024-11-20 15:20:00.492232] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:14.235 15:20:00 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.235 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.235 "name": "raid_bdev1", 00:12:14.235 "uuid": "b09e3000-fe2a-4a50-8aa9-a41606ad7c95", 00:12:14.235 "strip_size_kb": 0, 00:12:14.235 "state": "online", 00:12:14.235 "raid_level": "raid1", 00:12:14.235 
"superblock": false, 00:12:14.235 "num_base_bdevs": 2, 00:12:14.235 "num_base_bdevs_discovered": 1, 00:12:14.235 "num_base_bdevs_operational": 1, 00:12:14.235 "base_bdevs_list": [ 00:12:14.235 { 00:12:14.235 "name": null, 00:12:14.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.235 "is_configured": false, 00:12:14.235 "data_offset": 0, 00:12:14.235 "data_size": 65536 00:12:14.235 }, 00:12:14.235 { 00:12:14.235 "name": "BaseBdev2", 00:12:14.235 "uuid": "40aeaabc-b190-5bde-be61-5d91c514e1ce", 00:12:14.235 "is_configured": true, 00:12:14.236 "data_offset": 0, 00:12:14.236 "data_size": 65536 00:12:14.236 } 00:12:14.236 ] 00:12:14.236 }' 00:12:14.236 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.236 15:20:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.803 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:14.803 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:14.803 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:14.803 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:14.803 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:14.803 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.803 15:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.803 15:20:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.803 15:20:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.803 15:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.803 15:20:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:14.803 "name": "raid_bdev1", 00:12:14.803 "uuid": "b09e3000-fe2a-4a50-8aa9-a41606ad7c95", 00:12:14.803 "strip_size_kb": 0, 00:12:14.803 "state": "online", 00:12:14.803 "raid_level": "raid1", 00:12:14.803 "superblock": false, 00:12:14.803 "num_base_bdevs": 2, 00:12:14.803 "num_base_bdevs_discovered": 1, 00:12:14.803 "num_base_bdevs_operational": 1, 00:12:14.803 "base_bdevs_list": [ 00:12:14.803 { 00:12:14.803 "name": null, 00:12:14.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.803 "is_configured": false, 00:12:14.803 "data_offset": 0, 00:12:14.803 "data_size": 65536 00:12:14.803 }, 00:12:14.803 { 00:12:14.803 "name": "BaseBdev2", 00:12:14.803 "uuid": "40aeaabc-b190-5bde-be61-5d91c514e1ce", 00:12:14.803 "is_configured": true, 00:12:14.803 "data_offset": 0, 00:12:14.803 "data_size": 65536 00:12:14.803 } 00:12:14.803 ] 00:12:14.803 }' 00:12:14.803 15:20:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:14.803 15:20:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:14.803 15:20:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:14.803 15:20:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:14.803 15:20:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:14.803 15:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.803 15:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.803 [2024-11-20 15:20:01.087815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:14.803 [2024-11-20 15:20:01.104196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:14.803 15:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.803 
15:20:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:14.803 [2024-11-20 15:20:01.106266] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:15.737 15:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:15.737 15:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:15.737 15:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:15.737 15:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:15.737 15:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:15.737 15:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.737 15:20:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.737 15:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.737 15:20:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.737 15:20:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.737 15:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:15.737 "name": "raid_bdev1", 00:12:15.737 "uuid": "b09e3000-fe2a-4a50-8aa9-a41606ad7c95", 00:12:15.737 "strip_size_kb": 0, 00:12:15.737 "state": "online", 00:12:15.737 "raid_level": "raid1", 00:12:15.737 "superblock": false, 00:12:15.737 "num_base_bdevs": 2, 00:12:15.737 "num_base_bdevs_discovered": 2, 00:12:15.737 "num_base_bdevs_operational": 2, 00:12:15.737 "process": { 00:12:15.737 "type": "rebuild", 00:12:15.737 "target": "spare", 00:12:15.737 "progress": { 00:12:15.737 "blocks": 20480, 00:12:15.737 "percent": 31 00:12:15.737 } 00:12:15.737 }, 00:12:15.737 "base_bdevs_list": [ 
00:12:15.737 { 00:12:15.737 "name": "spare", 00:12:15.737 "uuid": "3cb2f7f0-0364-5b45-8d94-e87dfcb079b6", 00:12:15.737 "is_configured": true, 00:12:15.737 "data_offset": 0, 00:12:15.737 "data_size": 65536 00:12:15.737 }, 00:12:15.737 { 00:12:15.737 "name": "BaseBdev2", 00:12:15.737 "uuid": "40aeaabc-b190-5bde-be61-5d91c514e1ce", 00:12:15.737 "is_configured": true, 00:12:15.737 "data_offset": 0, 00:12:15.737 "data_size": 65536 00:12:15.737 } 00:12:15.737 ] 00:12:15.737 }' 00:12:15.737 15:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:15.737 15:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:15.737 15:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:15.997 15:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:15.997 15:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:15.997 15:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:15.997 15:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:15.997 15:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:15.997 15:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=366 00:12:15.997 15:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:15.997 15:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:15.997 15:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:15.997 15:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:15.997 15:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:15.997 
15:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:15.997 15:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.997 15:20:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.997 15:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.997 15:20:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.997 15:20:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.997 15:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:15.997 "name": "raid_bdev1", 00:12:15.997 "uuid": "b09e3000-fe2a-4a50-8aa9-a41606ad7c95", 00:12:15.997 "strip_size_kb": 0, 00:12:15.997 "state": "online", 00:12:15.997 "raid_level": "raid1", 00:12:15.997 "superblock": false, 00:12:15.997 "num_base_bdevs": 2, 00:12:15.997 "num_base_bdevs_discovered": 2, 00:12:15.997 "num_base_bdevs_operational": 2, 00:12:15.997 "process": { 00:12:15.997 "type": "rebuild", 00:12:15.997 "target": "spare", 00:12:15.997 "progress": { 00:12:15.997 "blocks": 22528, 00:12:15.997 "percent": 34 00:12:15.997 } 00:12:15.997 }, 00:12:15.997 "base_bdevs_list": [ 00:12:15.997 { 00:12:15.997 "name": "spare", 00:12:15.997 "uuid": "3cb2f7f0-0364-5b45-8d94-e87dfcb079b6", 00:12:15.997 "is_configured": true, 00:12:15.997 "data_offset": 0, 00:12:15.997 "data_size": 65536 00:12:15.997 }, 00:12:15.997 { 00:12:15.997 "name": "BaseBdev2", 00:12:15.997 "uuid": "40aeaabc-b190-5bde-be61-5d91c514e1ce", 00:12:15.997 "is_configured": true, 00:12:15.997 "data_offset": 0, 00:12:15.997 "data_size": 65536 00:12:15.997 } 00:12:15.997 ] 00:12:15.997 }' 00:12:15.997 15:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:15.997 15:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:12:15.997 15:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:15.997 15:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:15.997 15:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:16.931 15:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:16.931 15:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:16.931 15:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:16.931 15:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:16.931 15:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:16.931 15:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:16.931 15:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.931 15:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.931 15:20:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.931 15:20:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.190 15:20:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.190 15:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:17.190 "name": "raid_bdev1", 00:12:17.190 "uuid": "b09e3000-fe2a-4a50-8aa9-a41606ad7c95", 00:12:17.190 "strip_size_kb": 0, 00:12:17.190 "state": "online", 00:12:17.190 "raid_level": "raid1", 00:12:17.190 "superblock": false, 00:12:17.190 "num_base_bdevs": 2, 00:12:17.190 "num_base_bdevs_discovered": 2, 00:12:17.190 "num_base_bdevs_operational": 2, 00:12:17.190 "process": { 
00:12:17.190 "type": "rebuild", 00:12:17.190 "target": "spare", 00:12:17.190 "progress": { 00:12:17.190 "blocks": 45056, 00:12:17.190 "percent": 68 00:12:17.190 } 00:12:17.190 }, 00:12:17.190 "base_bdevs_list": [ 00:12:17.190 { 00:12:17.190 "name": "spare", 00:12:17.190 "uuid": "3cb2f7f0-0364-5b45-8d94-e87dfcb079b6", 00:12:17.190 "is_configured": true, 00:12:17.190 "data_offset": 0, 00:12:17.190 "data_size": 65536 00:12:17.190 }, 00:12:17.190 { 00:12:17.190 "name": "BaseBdev2", 00:12:17.190 "uuid": "40aeaabc-b190-5bde-be61-5d91c514e1ce", 00:12:17.190 "is_configured": true, 00:12:17.190 "data_offset": 0, 00:12:17.190 "data_size": 65536 00:12:17.190 } 00:12:17.190 ] 00:12:17.190 }' 00:12:17.190 15:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:17.190 15:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:17.190 15:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:17.190 15:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:17.191 15:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:18.138 [2024-11-20 15:20:04.320145] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:18.138 [2024-11-20 15:20:04.320229] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:18.138 [2024-11-20 15:20:04.320282] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.138 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:18.138 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:18.138 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:18.138 15:20:04 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:18.138 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:18.138 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:18.138 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.138 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.138 15:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.138 15:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.138 15:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.138 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:18.138 "name": "raid_bdev1", 00:12:18.139 "uuid": "b09e3000-fe2a-4a50-8aa9-a41606ad7c95", 00:12:18.139 "strip_size_kb": 0, 00:12:18.139 "state": "online", 00:12:18.139 "raid_level": "raid1", 00:12:18.139 "superblock": false, 00:12:18.139 "num_base_bdevs": 2, 00:12:18.139 "num_base_bdevs_discovered": 2, 00:12:18.139 "num_base_bdevs_operational": 2, 00:12:18.139 "base_bdevs_list": [ 00:12:18.139 { 00:12:18.139 "name": "spare", 00:12:18.139 "uuid": "3cb2f7f0-0364-5b45-8d94-e87dfcb079b6", 00:12:18.139 "is_configured": true, 00:12:18.139 "data_offset": 0, 00:12:18.139 "data_size": 65536 00:12:18.139 }, 00:12:18.139 { 00:12:18.139 "name": "BaseBdev2", 00:12:18.139 "uuid": "40aeaabc-b190-5bde-be61-5d91c514e1ce", 00:12:18.139 "is_configured": true, 00:12:18.139 "data_offset": 0, 00:12:18.139 "data_size": 65536 00:12:18.139 } 00:12:18.139 ] 00:12:18.139 }' 00:12:18.139 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:18.397 15:20:04 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:18.397 "name": "raid_bdev1", 00:12:18.397 "uuid": "b09e3000-fe2a-4a50-8aa9-a41606ad7c95", 00:12:18.397 "strip_size_kb": 0, 00:12:18.397 "state": "online", 00:12:18.397 "raid_level": "raid1", 00:12:18.397 "superblock": false, 00:12:18.397 "num_base_bdevs": 2, 00:12:18.397 "num_base_bdevs_discovered": 2, 00:12:18.397 "num_base_bdevs_operational": 2, 00:12:18.397 "base_bdevs_list": [ 00:12:18.397 { 00:12:18.397 "name": "spare", 00:12:18.397 "uuid": "3cb2f7f0-0364-5b45-8d94-e87dfcb079b6", 00:12:18.397 "is_configured": true, 
00:12:18.397 "data_offset": 0, 00:12:18.397 "data_size": 65536 00:12:18.397 }, 00:12:18.397 { 00:12:18.397 "name": "BaseBdev2", 00:12:18.397 "uuid": "40aeaabc-b190-5bde-be61-5d91c514e1ce", 00:12:18.397 "is_configured": true, 00:12:18.397 "data_offset": 0, 00:12:18.397 "data_size": 65536 00:12:18.397 } 00:12:18.397 ] 00:12:18.397 }' 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.397 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.397 "name": "raid_bdev1", 00:12:18.397 "uuid": "b09e3000-fe2a-4a50-8aa9-a41606ad7c95", 00:12:18.397 "strip_size_kb": 0, 00:12:18.397 "state": "online", 00:12:18.397 "raid_level": "raid1", 00:12:18.398 "superblock": false, 00:12:18.398 "num_base_bdevs": 2, 00:12:18.398 "num_base_bdevs_discovered": 2, 00:12:18.398 "num_base_bdevs_operational": 2, 00:12:18.398 "base_bdevs_list": [ 00:12:18.398 { 00:12:18.398 "name": "spare", 00:12:18.398 "uuid": "3cb2f7f0-0364-5b45-8d94-e87dfcb079b6", 00:12:18.398 "is_configured": true, 00:12:18.398 "data_offset": 0, 00:12:18.398 "data_size": 65536 00:12:18.398 }, 00:12:18.398 { 00:12:18.398 "name": "BaseBdev2", 00:12:18.398 "uuid": "40aeaabc-b190-5bde-be61-5d91c514e1ce", 00:12:18.398 "is_configured": true, 00:12:18.398 "data_offset": 0, 00:12:18.398 "data_size": 65536 00:12:18.398 } 00:12:18.398 ] 00:12:18.398 }' 00:12:18.398 15:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.398 15:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.963 15:20:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:18.963 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.963 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.963 [2024-11-20 15:20:05.238529] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:18.963 [2024-11-20 15:20:05.238563] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:18.963 [2024-11-20 15:20:05.238647] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.963 [2024-11-20 15:20:05.238728] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.963 [2024-11-20 15:20:05.238740] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:18.963 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.963 15:20:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.963 15:20:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:18.963 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.963 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.963 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.963 15:20:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:18.963 15:20:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:18.963 15:20:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:18.963 15:20:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:18.963 15:20:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:18.963 15:20:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:18.963 15:20:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:18.963 15:20:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:12:18.963 15:20:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:18.963 15:20:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:18.963 15:20:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:18.963 15:20:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:18.963 15:20:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:19.221 /dev/nbd0 00:12:19.221 15:20:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:19.221 15:20:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:19.221 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:19.221 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:19.221 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:19.221 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:19.221 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:19.221 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:19.221 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:19.221 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:19.221 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.221 1+0 records in 00:12:19.221 1+0 records out 00:12:19.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386969 s, 10.6 MB/s 00:12:19.221 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.221 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:19.221 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.221 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:19.221 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:19.221 15:20:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:19.221 15:20:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:19.221 15:20:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:19.480 /dev/nbd1 00:12:19.480 15:20:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:19.480 15:20:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:19.480 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:19.480 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:19.480 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:19.480 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:19.480 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:19.480 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:19.480 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:19.480 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:19.480 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.480 1+0 records in 00:12:19.480 1+0 records out 00:12:19.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308332 s, 13.3 MB/s 00:12:19.480 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.480 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:19.480 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.480 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:19.480 15:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:19.480 15:20:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:19.480 15:20:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:19.480 15:20:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:19.738 15:20:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:19.738 15:20:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:19.738 15:20:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:19.738 15:20:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:19.738 15:20:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:19.738 15:20:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:19.738 15:20:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:19.997 15:20:06 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:19.997 15:20:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:19.997 15:20:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:19.997 15:20:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:19.997 15:20:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:19.997 15:20:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:19.997 15:20:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:19.997 15:20:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:19.997 15:20:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:19.997 15:20:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:20.255 15:20:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:20.255 15:20:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:20.255 15:20:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:20.255 15:20:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:20.255 15:20:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:20.255 15:20:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:20.255 15:20:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:20.255 15:20:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:20.255 15:20:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:20.255 15:20:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75118 00:12:20.255 15:20:06 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75118 ']' 00:12:20.255 15:20:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75118 00:12:20.255 15:20:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:12:20.255 15:20:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:20.255 15:20:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75118 00:12:20.255 killing process with pid 75118 00:12:20.255 Received shutdown signal, test time was about 60.000000 seconds 00:12:20.255 00:12:20.255 Latency(us) 00:12:20.255 [2024-11-20T15:20:06.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:20.255 [2024-11-20T15:20:06.737Z] =================================================================================================================== 00:12:20.255 [2024-11-20T15:20:06.737Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:20.255 15:20:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:20.255 15:20:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:20.255 15:20:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75118' 00:12:20.255 15:20:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75118 00:12:20.255 15:20:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75118 00:12:20.255 [2024-11-20 15:20:06.532886] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:20.514 [2024-11-20 15:20:06.838125] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:21.902 15:20:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:21.902 00:12:21.902 real 0m15.187s 00:12:21.902 user 0m17.266s 00:12:21.902 sys 0m3.204s 00:12:21.902 
************************************ 00:12:21.902 END TEST raid_rebuild_test 00:12:21.902 ************************************ 00:12:21.902 15:20:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:21.902 15:20:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.903 15:20:08 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:21.903 15:20:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:21.903 15:20:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.903 15:20:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:21.903 ************************************ 00:12:21.903 START TEST raid_rebuild_test_sb 00:12:21.903 ************************************ 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75531 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75531 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75531 ']' 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:21.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:21.903 15:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.903 [2024-11-20 15:20:08.146630] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:12:21.903 [2024-11-20 15:20:08.146921] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:12:21.903 Zero copy mechanism will not be used. 
00:12:21.903 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75531 ] 00:12:21.903 [2024-11-20 15:20:08.328822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.162 [2024-11-20 15:20:08.445184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.422 [2024-11-20 15:20:08.654136] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.422 [2024-11-20 15:20:08.654410] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.682 15:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:22.682 15:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:22.682 15:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:22.683 15:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:22.683 15:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.683 15:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.683 BaseBdev1_malloc 00:12:22.683 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.683 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:22.683 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.683 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.683 [2024-11-20 15:20:09.033158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:22.683 [2024-11-20 15:20:09.033225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:22.683 [2024-11-20 15:20:09.033248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:22.683 [2024-11-20 15:20:09.033263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.683 [2024-11-20 15:20:09.035627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.683 [2024-11-20 15:20:09.035688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:22.683 BaseBdev1 00:12:22.683 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.683 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:22.683 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:22.683 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.683 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.683 BaseBdev2_malloc 00:12:22.683 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.683 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:22.683 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.683 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.683 [2024-11-20 15:20:09.092691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:22.683 [2024-11-20 15:20:09.092757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.683 [2024-11-20 15:20:09.092782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:22.683 [2024-11-20 15:20:09.092797] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.683 [2024-11-20 15:20:09.095189] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.683 [2024-11-20 15:20:09.095235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:22.683 BaseBdev2 00:12:22.683 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.683 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:22.683 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.683 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.683 spare_malloc 00:12:22.683 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.683 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:22.683 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.683 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.941 spare_delay 00:12:22.941 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.941 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:22.941 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.941 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.941 [2024-11-20 15:20:09.173368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:22.941 [2024-11-20 15:20:09.173432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:12:22.941 [2024-11-20 15:20:09.173452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:22.941 [2024-11-20 15:20:09.173467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.941 [2024-11-20 15:20:09.175830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.941 [2024-11-20 15:20:09.175874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:22.941 spare 00:12:22.941 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.941 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:22.941 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.941 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.941 [2024-11-20 15:20:09.185415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:22.941 [2024-11-20 15:20:09.187433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:22.941 [2024-11-20 15:20:09.187735] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:22.941 [2024-11-20 15:20:09.187758] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:22.941 [2024-11-20 15:20:09.188008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:22.941 [2024-11-20 15:20:09.188159] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:22.941 [2024-11-20 15:20:09.188169] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:22.941 [2024-11-20 15:20:09.188306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:22.941 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.941 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:22.941 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:22.941 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.941 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.941 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.941 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:22.941 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.941 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.941 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.941 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.941 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.941 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.941 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.941 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.941 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.941 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.941 "name": "raid_bdev1", 00:12:22.941 "uuid": "e91c1882-c237-4fd5-9c1a-72b721e2f96f", 00:12:22.941 
"strip_size_kb": 0, 00:12:22.941 "state": "online", 00:12:22.941 "raid_level": "raid1", 00:12:22.941 "superblock": true, 00:12:22.941 "num_base_bdevs": 2, 00:12:22.941 "num_base_bdevs_discovered": 2, 00:12:22.941 "num_base_bdevs_operational": 2, 00:12:22.941 "base_bdevs_list": [ 00:12:22.941 { 00:12:22.941 "name": "BaseBdev1", 00:12:22.941 "uuid": "30b51ba0-1d28-5b12-9b05-a88ae495b412", 00:12:22.941 "is_configured": true, 00:12:22.941 "data_offset": 2048, 00:12:22.941 "data_size": 63488 00:12:22.941 }, 00:12:22.941 { 00:12:22.941 "name": "BaseBdev2", 00:12:22.941 "uuid": "5beb29f1-9b49-569d-94cc-196cad3345d3", 00:12:22.941 "is_configured": true, 00:12:22.941 "data_offset": 2048, 00:12:22.941 "data_size": 63488 00:12:22.941 } 00:12:22.941 ] 00:12:22.941 }' 00:12:22.941 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.941 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.201 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:23.201 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:23.201 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.201 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.201 [2024-11-20 15:20:09.637121] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:23.201 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.201 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:23.201 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:23.201 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.201 15:20:09 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.201 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.460 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.460 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:23.460 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:23.460 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:23.460 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:23.460 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:23.460 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:23.460 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:23.460 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:23.460 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:23.460 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:23.460 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:23.460 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:23.460 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:23.460 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:23.460 [2024-11-20 15:20:09.888507] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:23.460 /dev/nbd0 00:12:23.460 
15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:12:23.460 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:12:23.460 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:12:23.460 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i
00:12:23.460 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:23.460 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:23.461 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:12:23.720 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break
00:12:23.720 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:23.720 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:23.720 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:23.720 1+0 records in
00:12:23.720 1+0 records out
00:12:23.720 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378061 s, 10.8 MB/s
00:12:23.720 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:23.720 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096
00:12:23.720 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:23.720 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:23.720 15:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0
00:12:23.720 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:23.720 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:12:23.720 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:12:23.720 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:12:23.720 15:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct
00:12:29.026 63488+0 records in
00:12:29.026 63488+0 records out
00:12:29.026 32505856 bytes (33 MB, 31 MiB) copied, 4.65003 s, 7.0 MB/s
00:12:29.026 15:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:12:29.026 15:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:12:29.026 15:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:12:29.026 15:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:12:29.026 15:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:12:29.026 15:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:29.026 15:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
[2024-11-20 15:20:14.820702] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:29.026 15:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:12:29.026 15:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:12:29.026 15:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:12:29.026 15:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:29.027 15:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:29.027 15:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:12:29.027 15:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:12:29.027 15:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:12:29.027 15:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:12:29.027 15:20:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:29.027 15:20:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
[2024-11-20 15:20:14.852757] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:12:29.027 15:20:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:29.027 15:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:12:29.027 15:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:29.027 15:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:29.027 15:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:29.027 15:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:29.027 15:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:12:29.027 15:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:29.027 15:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:29.027 15:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:29.027 15:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:29.027 15:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:29.027 15:20:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:29.027 15:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:29.027 15:20:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:29.027 15:20:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:29.027 15:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:29.027 "name": "raid_bdev1",
00:12:29.027 "uuid": "e91c1882-c237-4fd5-9c1a-72b721e2f96f",
00:12:29.027 "strip_size_kb": 0,
00:12:29.027 "state": "online",
00:12:29.027 "raid_level": "raid1",
00:12:29.027 "superblock": true,
00:12:29.027 "num_base_bdevs": 2,
00:12:29.027 "num_base_bdevs_discovered": 1,
00:12:29.027 "num_base_bdevs_operational": 1,
00:12:29.027 "base_bdevs_list": [
00:12:29.027 {
00:12:29.027 "name": null,
00:12:29.027 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:29.027 "is_configured": false,
00:12:29.027 "data_offset": 0,
00:12:29.027 "data_size": 63488
00:12:29.027 },
00:12:29.027 {
00:12:29.027 "name": "BaseBdev2",
00:12:29.027 "uuid": "5beb29f1-9b49-569d-94cc-196cad3345d3",
00:12:29.027 "is_configured": true,
00:12:29.027 "data_offset": 2048,
00:12:29.027 "data_size": 63488
00:12:29.027 }
00:12:29.027 ]
00:12:29.027 }'
00:12:29.027 15:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:29.027 15:20:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:29.027 15:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:12:29.027 15:20:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:29.027 15:20:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
[2024-11-20 15:20:15.304107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
[2024-11-20 15:20:15.321715] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360
00:12:29.027 15:20:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:29.027 15:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1
[2024-11-20 15:20:15.323963] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:12:29.978 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:29.978 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:29.978 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:29.978 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:29.978 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:29.978 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:29.978 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:29.978 15:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:29.978 15:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:29.978 15:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:29.978 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:29.978 "name": "raid_bdev1",
00:12:29.978 "uuid": "e91c1882-c237-4fd5-9c1a-72b721e2f96f",
00:12:29.978 "strip_size_kb": 0,
00:12:29.978 "state": "online",
00:12:29.978 "raid_level": "raid1",
00:12:29.978 "superblock": true,
00:12:29.978 "num_base_bdevs": 2,
00:12:29.978 "num_base_bdevs_discovered": 2,
00:12:29.978 "num_base_bdevs_operational": 2,
00:12:29.978 "process": {
00:12:29.978 "type": "rebuild",
00:12:29.978 "target": "spare",
00:12:29.978 "progress": {
00:12:29.978 "blocks": 20480,
00:12:29.978 "percent": 32
00:12:29.978 }
00:12:29.978 },
00:12:29.978 "base_bdevs_list": [
00:12:29.978 {
00:12:29.978 "name": "spare",
00:12:29.978 "uuid": "f1c4f87a-a032-5a64-b9c9-a84c0dab11c4",
00:12:29.978 "is_configured": true,
00:12:29.978 "data_offset": 2048,
00:12:29.978 "data_size": 63488
00:12:29.978 },
00:12:29.978 {
00:12:29.978 "name": "BaseBdev2",
00:12:29.978 "uuid": "5beb29f1-9b49-569d-94cc-196cad3345d3",
00:12:29.978 "is_configured": true,
00:12:29.978 "data_offset": 2048,
00:12:29.978 "data_size": 63488
00:12:29.978 }
00:12:29.978 ]
00:12:29.978 }'
00:12:29.978 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:29.978 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:29.978 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:30.238 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:30.238 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:12:30.238 15:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.238 15:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
[2024-11-20 15:20:16.467842] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
[2024-11-20 15:20:16.529370] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
[2024-11-20 15:20:16.529454] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
[2024-11-20 15:20:16.529471] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
[2024-11-20 15:20:16.529482] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:12:30.238 15:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:30.238 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:12:30.238 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:30.238 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:30.238 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:30.238 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:30.238 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:12:30.238 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:30.238 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:30.238 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:30.238 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:30.238 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:30.238 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:30.238 15:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.238 15:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:30.238 15:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:30.238 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:30.238 "name": "raid_bdev1",
00:12:30.238 "uuid": "e91c1882-c237-4fd5-9c1a-72b721e2f96f",
00:12:30.238 "strip_size_kb": 0,
00:12:30.238 "state": "online",
00:12:30.238 "raid_level": "raid1",
00:12:30.238 "superblock": true,
00:12:30.238 "num_base_bdevs": 2,
00:12:30.238 "num_base_bdevs_discovered": 1,
00:12:30.238 "num_base_bdevs_operational": 1,
00:12:30.238 "base_bdevs_list": [
00:12:30.238 {
00:12:30.238 "name": null,
00:12:30.238 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:30.238 "is_configured": false,
00:12:30.238 "data_offset": 0,
00:12:30.238 "data_size": 63488
00:12:30.238 },
00:12:30.238 {
00:12:30.238 "name": "BaseBdev2",
00:12:30.238 "uuid": "5beb29f1-9b49-569d-94cc-196cad3345d3",
00:12:30.238 "is_configured": true,
00:12:30.238 "data_offset": 2048,
00:12:30.238 "data_size": 63488
00:12:30.239 }
00:12:30.239 ]
00:12:30.239 }'
00:12:30.239 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:30.239 15:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:30.499 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:12:30.499 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:30.499 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:12:30.499 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:12:30.499 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:30.499 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:30.499 15:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.499 15:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:30.499 15:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:30.499 15:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:30.758 15:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:30.758 "name": "raid_bdev1",
00:12:30.758 "uuid": "e91c1882-c237-4fd5-9c1a-72b721e2f96f",
00:12:30.758 "strip_size_kb": 0,
00:12:30.758 "state": "online",
00:12:30.758 "raid_level": "raid1",
00:12:30.758 "superblock": true,
00:12:30.758 "num_base_bdevs": 2,
00:12:30.758 "num_base_bdevs_discovered": 1,
00:12:30.758 "num_base_bdevs_operational": 1,
00:12:30.758 "base_bdevs_list": [
00:12:30.758 {
00:12:30.758 "name": null,
00:12:30.758 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:30.758 "is_configured": false,
00:12:30.758 "data_offset": 0,
00:12:30.758 "data_size": 63488
00:12:30.758 },
00:12:30.758 {
00:12:30.758 "name": "BaseBdev2",
00:12:30.758 "uuid": "5beb29f1-9b49-569d-94cc-196cad3345d3",
00:12:30.758 "is_configured": true,
00:12:30.758 "data_offset": 2048,
00:12:30.758 "data_size": 63488
00:12:30.758 }
00:12:30.758 ]
00:12:30.758 }'
00:12:30.758 15:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:30.758 15:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:12:30.758 15:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:30.758 15:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:12:30.758 15:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:12:30.758 15:20:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.758 15:20:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
[2024-11-20 15:20:17.082586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
[2024-11-20 15:20:17.099120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430
00:12:30.758 15:20:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:30.758 15:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1
[2024-11-20 15:20:17.101210] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:12:31.695 15:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:31.695 15:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:31.695 15:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:31.695 15:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:31.695 15:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:31.695 15:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:31.695 15:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:31.695 15:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:31.695 15:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:31.695 15:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:31.695 15:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:31.695 "name": "raid_bdev1",
00:12:31.695 "uuid": "e91c1882-c237-4fd5-9c1a-72b721e2f96f",
00:12:31.695 "strip_size_kb": 0,
00:12:31.695 "state": "online",
00:12:31.695 "raid_level": "raid1",
00:12:31.695 "superblock": true,
00:12:31.695 "num_base_bdevs": 2,
00:12:31.695 "num_base_bdevs_discovered": 2,
00:12:31.695 "num_base_bdevs_operational": 2,
00:12:31.695 "process": {
00:12:31.695 "type": "rebuild",
00:12:31.695 "target": "spare",
00:12:31.695 "progress": {
00:12:31.695 "blocks": 20480,
00:12:31.695 "percent": 32
00:12:31.695 }
00:12:31.695 },
00:12:31.695 "base_bdevs_list": [
00:12:31.695 {
00:12:31.695 "name": "spare",
00:12:31.696 "uuid": "f1c4f87a-a032-5a64-b9c9-a84c0dab11c4",
00:12:31.696 "is_configured": true,
00:12:31.696 "data_offset": 2048,
00:12:31.696 "data_size": 63488
00:12:31.696 },
00:12:31.696 {
00:12:31.696 "name": "BaseBdev2",
00:12:31.696 "uuid": "5beb29f1-9b49-569d-94cc-196cad3345d3",
00:12:31.696 "is_configured": true,
00:12:31.696 "data_offset": 2048,
00:12:31.696 "data_size": 63488
00:12:31.696 }
00:12:31.696 ]
00:12:31.696 }'
00:12:31.696 15:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:31.955 15:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:31.955 15:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:31.955 15:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:31.955 15:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:12:31.955 15:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:12:31.955 15:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:12:31.955 15:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:12:31.955 15:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:12:31.955 15:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=382
00:12:31.955 15:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:12:31.955 15:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:31.955 15:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:31.955 15:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:31.955 15:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:31.955 15:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:31.955 15:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:31.955 15:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:31.955 15:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:31.955 15:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:31.955 15:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:31.955 15:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:31.955 "name": "raid_bdev1",
00:12:31.955 "uuid": "e91c1882-c237-4fd5-9c1a-72b721e2f96f",
00:12:31.955 "strip_size_kb": 0,
00:12:31.955 "state": "online",
00:12:31.955 "raid_level": "raid1",
00:12:31.955 "superblock": true,
00:12:31.955 "num_base_bdevs": 2,
00:12:31.955 "num_base_bdevs_discovered": 2,
00:12:31.955 "num_base_bdevs_operational": 2,
00:12:31.955 "process": {
00:12:31.955 "type": "rebuild",
00:12:31.955 "target": "spare",
00:12:31.955 "progress": {
00:12:31.955 "blocks": 22528,
00:12:31.955 "percent": 35
00:12:31.955 }
00:12:31.955 },
00:12:31.955 "base_bdevs_list": [
00:12:31.955 {
00:12:31.955 "name": "spare",
00:12:31.955 "uuid": "f1c4f87a-a032-5a64-b9c9-a84c0dab11c4",
00:12:31.955 "is_configured": true,
00:12:31.955 "data_offset": 2048,
00:12:31.955 "data_size": 63488
00:12:31.955 },
00:12:31.955 {
00:12:31.955 "name": "BaseBdev2",
00:12:31.955 "uuid": "5beb29f1-9b49-569d-94cc-196cad3345d3",
00:12:31.955 "is_configured": true,
00:12:31.955 "data_offset": 2048,
00:12:31.955 "data_size": 63488
00:12:31.955 }
00:12:31.955 ]
00:12:31.955 }'
00:12:31.955 15:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:31.955 15:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:31.955 15:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:31.955 15:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:31.955 15:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:12:33.368 15:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:12:33.368 15:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:33.368 15:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:33.368 15:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:33.368 15:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:33.368 15:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:33.368 15:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:33.368 15:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:33.368 15:20:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:33.368 15:20:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:33.368 15:20:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:33.368 15:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:33.368 "name": "raid_bdev1",
00:12:33.368 "uuid": "e91c1882-c237-4fd5-9c1a-72b721e2f96f",
00:12:33.368 "strip_size_kb": 0,
00:12:33.368 "state": "online",
00:12:33.368 "raid_level": "raid1",
00:12:33.368 "superblock": true,
00:12:33.368 "num_base_bdevs": 2,
00:12:33.368 "num_base_bdevs_discovered": 2,
00:12:33.368 "num_base_bdevs_operational": 2,
00:12:33.368 "process": {
00:12:33.368 "type": "rebuild",
00:12:33.368 "target": "spare",
00:12:33.368 "progress": {
00:12:33.368 "blocks": 45056,
00:12:33.368 "percent": 70
00:12:33.368 }
00:12:33.368 },
00:12:33.368 "base_bdevs_list": [
00:12:33.368 {
00:12:33.368 "name": "spare",
00:12:33.368 "uuid": "f1c4f87a-a032-5a64-b9c9-a84c0dab11c4",
00:12:33.368 "is_configured": true,
00:12:33.368 "data_offset": 2048,
00:12:33.368 "data_size": 63488
00:12:33.368 },
00:12:33.368 {
00:12:33.368 "name": "BaseBdev2",
00:12:33.368 "uuid": "5beb29f1-9b49-569d-94cc-196cad3345d3",
00:12:33.368 "is_configured": true,
00:12:33.368 "data_offset": 2048,
00:12:33.368 "data_size": 63488
00:12:33.368 }
00:12:33.368 ]
00:12:33.368 }'
00:12:33.368 15:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:33.368 15:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:33.368 15:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:33.368 15:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:33.368 15:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:12:33.958 [2024-11-20 15:20:20.214356] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
[2024-11-20 15:20:20.214678] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
[2024-11-20 15:20:20.214823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:34.216 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:12:34.216 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:34.216 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:34.216 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:34.216 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:34.216 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:34.216 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:34.216 15:20:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:34.216 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:34.216 15:20:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:34.216 15:20:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:34.216 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:34.216 "name": "raid_bdev1",
00:12:34.216 "uuid": "e91c1882-c237-4fd5-9c1a-72b721e2f96f",
00:12:34.216 "strip_size_kb": 0,
00:12:34.216 "state": "online",
00:12:34.216 "raid_level": "raid1",
00:12:34.216 "superblock": true,
00:12:34.216 "num_base_bdevs": 2,
00:12:34.216 "num_base_bdevs_discovered": 2,
00:12:34.216 "num_base_bdevs_operational": 2,
00:12:34.216 "base_bdevs_list": [
00:12:34.216 {
00:12:34.216 "name": "spare",
00:12:34.216 "uuid": "f1c4f87a-a032-5a64-b9c9-a84c0dab11c4",
00:12:34.216 "is_configured": true,
00:12:34.216 "data_offset": 2048,
00:12:34.216 "data_size": 63488
00:12:34.216 },
00:12:34.216 {
00:12:34.216 "name": "BaseBdev2",
00:12:34.216 "uuid": "5beb29f1-9b49-569d-94cc-196cad3345d3",
00:12:34.216 "is_configured": true,
00:12:34.216 "data_offset": 2048,
00:12:34.216 "data_size": 63488
00:12:34.216 }
00:12:34.216 ]
00:12:34.216 }'
00:12:34.216 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:34.216 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:12:34.216 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:34.216 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:12:34.216 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break
00:12:34.216 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:12:34.216 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:34.216 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:12:34.216 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:12:34.216 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:34.216 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:34.216 15:20:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:34.216 15:20:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:34.216 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:34.476 15:20:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:34.476 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:34.476 "name": "raid_bdev1",
00:12:34.476 "uuid": "e91c1882-c237-4fd5-9c1a-72b721e2f96f",
00:12:34.476 "strip_size_kb": 0,
00:12:34.476 "state": "online",
00:12:34.476 "raid_level": "raid1",
00:12:34.476 "superblock": true,
00:12:34.476 "num_base_bdevs": 2,
00:12:34.476 "num_base_bdevs_discovered": 2,
00:12:34.476 "num_base_bdevs_operational": 2,
00:12:34.476 "base_bdevs_list": [
00:12:34.476 {
00:12:34.476 "name": "spare",
00:12:34.476 "uuid": "f1c4f87a-a032-5a64-b9c9-a84c0dab11c4",
00:12:34.476 "is_configured": true,
00:12:34.476 "data_offset": 2048,
00:12:34.476 "data_size": 63488
00:12:34.476 },
00:12:34.476 {
00:12:34.476 "name": "BaseBdev2",
00:12:34.476 "uuid": "5beb29f1-9b49-569d-94cc-196cad3345d3",
00:12:34.476 "is_configured": true,
00:12:34.476 "data_offset": 2048,
00:12:34.476 "data_size": 63488
00:12:34.476 }
00:12:34.476 ]
00:12:34.476 }'
00:12:34.476 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:34.476 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:12:34.476 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:34.476 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:12:34.476 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:12:34.476 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:34.476 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:34.476 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:34.476 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:34.476 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:12:34.476 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:34.476 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:34.476 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:34.476 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:34.476 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:34.476 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:34.476 15:20:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:34.476 15:20:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:34.476 15:20:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:34.476 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:34.476 "name": "raid_bdev1",
00:12:34.476 "uuid": "e91c1882-c237-4fd5-9c1a-72b721e2f96f",
00:12:34.476 "strip_size_kb": 0,
00:12:34.476 "state": "online",
00:12:34.476 "raid_level": "raid1",
00:12:34.476 "superblock": true,
00:12:34.476 "num_base_bdevs": 2,
00:12:34.476 "num_base_bdevs_discovered": 2,
00:12:34.476 "num_base_bdevs_operational": 2,
00:12:34.476 "base_bdevs_list": [
00:12:34.476 {
00:12:34.476 "name": "spare",
00:12:34.476 "uuid": "f1c4f87a-a032-5a64-b9c9-a84c0dab11c4",
00:12:34.476 "is_configured": true,
00:12:34.476 "data_offset": 2048,
00:12:34.477 "data_size": 63488
00:12:34.477 },
00:12:34.477 {
00:12:34.477 "name": "BaseBdev2",
00:12:34.477 "uuid": "5beb29f1-9b49-569d-94cc-196cad3345d3",
00:12:34.477 "is_configured": true,
00:12:34.477 "data_offset": 2048,
00:12:34.477 "data_size": 63488
00:12:34.477 }
00:12:34.477 ]
00:12:34.477 }'
00:12:34.477 15:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:34.477 15:20:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:34.736 15:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:12:34.736 15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:34.736 15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
[2024-11-20 15:20:21.171298] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
[2024-11-20 15:20:21.171331] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-11-20 15:20:21.171408] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-11-20 15:20:21.171476] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-20 15:20:21.171491] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:12:34.736 15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:34.736 15:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:34.736 15:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length
00:12:34.736 15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:34.736 15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:34.736 15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:34.996 15:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:12:34.996 15:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:12:34.996 15:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:12:34.996 15:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:12:34.996 15:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:12:34.996 15:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:12:34.996 15:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:12:34.996 15:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:12:34.996 15:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:12:34.996 15:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:12:34.996 15:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:12:34.996 15:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:12:34.996 15:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
/dev/nbd0
15:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
15:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i
15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 ))
15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 ))
15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break
15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 ))
15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 ))
15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
1+0 records in
1+0 records out
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025484 s, 16.1 MB/s
15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096
15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0
15:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
15:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
15:20:21 bdev_raid.raid_rebuild_test_sb --
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:35.255 /dev/nbd1 00:12:35.255 15:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:35.255 15:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:35.255 15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:35.255 15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:35.255 15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:35.255 15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:35.255 15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:35.255 15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:35.255 15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:35.255 15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:35.255 15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:35.255 1+0 records in 00:12:35.255 1+0 records out 00:12:35.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346094 s, 11.8 MB/s 00:12:35.255 15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.255 15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:35.255 15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.255 15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # 
'[' 4096 '!=' 0 ']' 00:12:35.255 15:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:35.255 15:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:35.255 15:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:35.255 15:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:35.514 15:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:35.514 15:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:35.514 15:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:35.514 15:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:35.514 15:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:35.514 15:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.514 15:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:35.772 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:35.772 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:35.772 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:35.772 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.772 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.772 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:35.772 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:35.772 
15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.772 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.772 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:36.030 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:36.030 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:36.030 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:36.030 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:36.030 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:36.030 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:36.030 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:36.030 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:36.030 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:36.030 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:36.030 15:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.030 15:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.030 15:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.031 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:36.031 15:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.031 15:20:22 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:36.031 [2024-11-20 15:20:22.348918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:36.031 [2024-11-20 15:20:22.348971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.031 [2024-11-20 15:20:22.348998] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:36.031 [2024-11-20 15:20:22.349010] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.031 [2024-11-20 15:20:22.351442] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.031 [2024-11-20 15:20:22.351600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:36.031 [2024-11-20 15:20:22.351729] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:36.031 [2024-11-20 15:20:22.351793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:36.031 [2024-11-20 15:20:22.351938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:36.031 spare 00:12:36.031 15:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.031 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:36.031 15:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.031 15:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.031 [2024-11-20 15:20:22.451866] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:36.031 [2024-11-20 15:20:22.451910] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:36.031 [2024-11-20 15:20:22.452243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:36.031 [2024-11-20 
15:20:22.452446] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:36.031 [2024-11-20 15:20:22.452457] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:36.031 [2024-11-20 15:20:22.452640] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.031 15:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.031 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:36.031 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.031 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.031 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.031 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.031 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:36.031 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.031 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.031 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.031 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.031 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.031 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.031 15:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.031 15:20:22 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:36.031 15:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.031 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.031 "name": "raid_bdev1", 00:12:36.031 "uuid": "e91c1882-c237-4fd5-9c1a-72b721e2f96f", 00:12:36.031 "strip_size_kb": 0, 00:12:36.031 "state": "online", 00:12:36.031 "raid_level": "raid1", 00:12:36.031 "superblock": true, 00:12:36.031 "num_base_bdevs": 2, 00:12:36.031 "num_base_bdevs_discovered": 2, 00:12:36.031 "num_base_bdevs_operational": 2, 00:12:36.031 "base_bdevs_list": [ 00:12:36.031 { 00:12:36.031 "name": "spare", 00:12:36.031 "uuid": "f1c4f87a-a032-5a64-b9c9-a84c0dab11c4", 00:12:36.031 "is_configured": true, 00:12:36.031 "data_offset": 2048, 00:12:36.031 "data_size": 63488 00:12:36.031 }, 00:12:36.031 { 00:12:36.031 "name": "BaseBdev2", 00:12:36.031 "uuid": "5beb29f1-9b49-569d-94cc-196cad3345d3", 00:12:36.031 "is_configured": true, 00:12:36.031 "data_offset": 2048, 00:12:36.031 "data_size": 63488 00:12:36.031 } 00:12:36.031 ] 00:12:36.031 }' 00:12:36.031 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.031 15:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.597 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:36.597 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.597 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:36.597 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:36.597 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.597 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.597 
15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.597 15:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.597 15:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.598 15:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.598 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.598 "name": "raid_bdev1", 00:12:36.598 "uuid": "e91c1882-c237-4fd5-9c1a-72b721e2f96f", 00:12:36.598 "strip_size_kb": 0, 00:12:36.598 "state": "online", 00:12:36.598 "raid_level": "raid1", 00:12:36.598 "superblock": true, 00:12:36.598 "num_base_bdevs": 2, 00:12:36.598 "num_base_bdevs_discovered": 2, 00:12:36.598 "num_base_bdevs_operational": 2, 00:12:36.598 "base_bdevs_list": [ 00:12:36.598 { 00:12:36.598 "name": "spare", 00:12:36.598 "uuid": "f1c4f87a-a032-5a64-b9c9-a84c0dab11c4", 00:12:36.598 "is_configured": true, 00:12:36.598 "data_offset": 2048, 00:12:36.598 "data_size": 63488 00:12:36.598 }, 00:12:36.598 { 00:12:36.598 "name": "BaseBdev2", 00:12:36.598 "uuid": "5beb29f1-9b49-569d-94cc-196cad3345d3", 00:12:36.598 "is_configured": true, 00:12:36.598 "data_offset": 2048, 00:12:36.598 "data_size": 63488 00:12:36.598 } 00:12:36.598 ] 00:12:36.598 }' 00:12:36.598 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:36.598 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:36.598 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:36.598 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:36.598 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.598 15:20:22 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.598 15:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.598 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:36.598 15:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.598 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:36.598 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:36.598 15:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.598 15:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.598 [2024-11-20 15:20:22.968144] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:36.598 15:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.598 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:36.598 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.598 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.598 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.598 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.598 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:36.598 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.598 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.598 15:20:22 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.598 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.598 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.598 15:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.598 15:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.598 15:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.598 15:20:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.598 15:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.598 "name": "raid_bdev1", 00:12:36.598 "uuid": "e91c1882-c237-4fd5-9c1a-72b721e2f96f", 00:12:36.598 "strip_size_kb": 0, 00:12:36.598 "state": "online", 00:12:36.598 "raid_level": "raid1", 00:12:36.598 "superblock": true, 00:12:36.598 "num_base_bdevs": 2, 00:12:36.598 "num_base_bdevs_discovered": 1, 00:12:36.598 "num_base_bdevs_operational": 1, 00:12:36.598 "base_bdevs_list": [ 00:12:36.598 { 00:12:36.598 "name": null, 00:12:36.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.598 "is_configured": false, 00:12:36.598 "data_offset": 0, 00:12:36.598 "data_size": 63488 00:12:36.598 }, 00:12:36.598 { 00:12:36.598 "name": "BaseBdev2", 00:12:36.598 "uuid": "5beb29f1-9b49-569d-94cc-196cad3345d3", 00:12:36.598 "is_configured": true, 00:12:36.598 "data_offset": 2048, 00:12:36.598 "data_size": 63488 00:12:36.598 } 00:12:36.598 ] 00:12:36.598 }' 00:12:36.598 15:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.598 15:20:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.166 15:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:12:37.166 15:20:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.166 15:20:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.166 [2024-11-20 15:20:23.395733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:37.166 [2024-11-20 15:20:23.395955] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:37.166 [2024-11-20 15:20:23.395980] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:37.166 [2024-11-20 15:20:23.396020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:37.166 [2024-11-20 15:20:23.413153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:37.166 15:20:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.166 15:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:37.166 [2024-11-20 15:20:23.415403] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:38.102 15:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:38.102 15:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.102 15:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:38.102 15:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:38.102 15:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.102 15:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.102 15:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:12:38.102 15:20:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.102 15:20:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.102 15:20:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.102 15:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.102 "name": "raid_bdev1", 00:12:38.102 "uuid": "e91c1882-c237-4fd5-9c1a-72b721e2f96f", 00:12:38.102 "strip_size_kb": 0, 00:12:38.102 "state": "online", 00:12:38.102 "raid_level": "raid1", 00:12:38.102 "superblock": true, 00:12:38.102 "num_base_bdevs": 2, 00:12:38.102 "num_base_bdevs_discovered": 2, 00:12:38.102 "num_base_bdevs_operational": 2, 00:12:38.102 "process": { 00:12:38.102 "type": "rebuild", 00:12:38.102 "target": "spare", 00:12:38.102 "progress": { 00:12:38.102 "blocks": 20480, 00:12:38.102 "percent": 32 00:12:38.102 } 00:12:38.102 }, 00:12:38.102 "base_bdevs_list": [ 00:12:38.102 { 00:12:38.102 "name": "spare", 00:12:38.102 "uuid": "f1c4f87a-a032-5a64-b9c9-a84c0dab11c4", 00:12:38.102 "is_configured": true, 00:12:38.102 "data_offset": 2048, 00:12:38.102 "data_size": 63488 00:12:38.102 }, 00:12:38.102 { 00:12:38.102 "name": "BaseBdev2", 00:12:38.102 "uuid": "5beb29f1-9b49-569d-94cc-196cad3345d3", 00:12:38.102 "is_configured": true, 00:12:38.102 "data_offset": 2048, 00:12:38.102 "data_size": 63488 00:12:38.102 } 00:12:38.103 ] 00:12:38.103 }' 00:12:38.103 15:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.103 15:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:38.103 15:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.103 15:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:38.103 15:20:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:38.103 15:20:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.103 15:20:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.103 [2024-11-20 15:20:24.559567] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:38.361 [2024-11-20 15:20:24.620396] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:38.361 [2024-11-20 15:20:24.620466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.362 [2024-11-20 15:20:24.620482] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:38.362 [2024-11-20 15:20:24.620493] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:38.362 15:20:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.362 15:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:38.362 15:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.362 15:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.362 15:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.362 15:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.362 15:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:38.362 15:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.362 15:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.362 15:20:24 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.362 15:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.362 15:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.362 15:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.362 15:20:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.362 15:20:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.362 15:20:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.362 15:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.362 "name": "raid_bdev1", 00:12:38.362 "uuid": "e91c1882-c237-4fd5-9c1a-72b721e2f96f", 00:12:38.362 "strip_size_kb": 0, 00:12:38.362 "state": "online", 00:12:38.362 "raid_level": "raid1", 00:12:38.362 "superblock": true, 00:12:38.362 "num_base_bdevs": 2, 00:12:38.362 "num_base_bdevs_discovered": 1, 00:12:38.362 "num_base_bdevs_operational": 1, 00:12:38.362 "base_bdevs_list": [ 00:12:38.362 { 00:12:38.362 "name": null, 00:12:38.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.362 "is_configured": false, 00:12:38.362 "data_offset": 0, 00:12:38.362 "data_size": 63488 00:12:38.362 }, 00:12:38.362 { 00:12:38.362 "name": "BaseBdev2", 00:12:38.362 "uuid": "5beb29f1-9b49-569d-94cc-196cad3345d3", 00:12:38.362 "is_configured": true, 00:12:38.362 "data_offset": 2048, 00:12:38.362 "data_size": 63488 00:12:38.362 } 00:12:38.362 ] 00:12:38.362 }' 00:12:38.362 15:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.362 15:20:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.621 15:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 
00:12:38.621 15:20:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.621 15:20:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.621 [2024-11-20 15:20:25.060203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:38.621 [2024-11-20 15:20:25.060278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.621 [2024-11-20 15:20:25.060302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:38.621 [2024-11-20 15:20:25.060316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.621 [2024-11-20 15:20:25.060804] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.621 [2024-11-20 15:20:25.060838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:38.621 [2024-11-20 15:20:25.060934] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:38.621 [2024-11-20 15:20:25.060951] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:38.621 [2024-11-20 15:20:25.060963] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:38.621 [2024-11-20 15:20:25.060992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:38.621 [2024-11-20 15:20:25.077152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:38.621 spare 00:12:38.621 15:20:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.621 15:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:38.621 [2024-11-20 15:20:25.079256] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.002 "name": "raid_bdev1", 00:12:40.002 "uuid": "e91c1882-c237-4fd5-9c1a-72b721e2f96f", 00:12:40.002 "strip_size_kb": 0, 00:12:40.002 "state": "online", 00:12:40.002 
"raid_level": "raid1", 00:12:40.002 "superblock": true, 00:12:40.002 "num_base_bdevs": 2, 00:12:40.002 "num_base_bdevs_discovered": 2, 00:12:40.002 "num_base_bdevs_operational": 2, 00:12:40.002 "process": { 00:12:40.002 "type": "rebuild", 00:12:40.002 "target": "spare", 00:12:40.002 "progress": { 00:12:40.002 "blocks": 20480, 00:12:40.002 "percent": 32 00:12:40.002 } 00:12:40.002 }, 00:12:40.002 "base_bdevs_list": [ 00:12:40.002 { 00:12:40.002 "name": "spare", 00:12:40.002 "uuid": "f1c4f87a-a032-5a64-b9c9-a84c0dab11c4", 00:12:40.002 "is_configured": true, 00:12:40.002 "data_offset": 2048, 00:12:40.002 "data_size": 63488 00:12:40.002 }, 00:12:40.002 { 00:12:40.002 "name": "BaseBdev2", 00:12:40.002 "uuid": "5beb29f1-9b49-569d-94cc-196cad3345d3", 00:12:40.002 "is_configured": true, 00:12:40.002 "data_offset": 2048, 00:12:40.002 "data_size": 63488 00:12:40.002 } 00:12:40.002 ] 00:12:40.002 }' 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.002 [2024-11-20 15:20:26.231168] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:40.002 [2024-11-20 15:20:26.284701] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:40.002 [2024-11-20 15:20:26.284775] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.002 [2024-11-20 15:20:26.284796] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:40.002 [2024-11-20 15:20:26.284804] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.002 15:20:26 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.002 "name": "raid_bdev1", 00:12:40.002 "uuid": "e91c1882-c237-4fd5-9c1a-72b721e2f96f", 00:12:40.002 "strip_size_kb": 0, 00:12:40.002 "state": "online", 00:12:40.002 "raid_level": "raid1", 00:12:40.002 "superblock": true, 00:12:40.002 "num_base_bdevs": 2, 00:12:40.002 "num_base_bdevs_discovered": 1, 00:12:40.002 "num_base_bdevs_operational": 1, 00:12:40.002 "base_bdevs_list": [ 00:12:40.002 { 00:12:40.002 "name": null, 00:12:40.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.002 "is_configured": false, 00:12:40.002 "data_offset": 0, 00:12:40.002 "data_size": 63488 00:12:40.002 }, 00:12:40.002 { 00:12:40.002 "name": "BaseBdev2", 00:12:40.002 "uuid": "5beb29f1-9b49-569d-94cc-196cad3345d3", 00:12:40.002 "is_configured": true, 00:12:40.002 "data_offset": 2048, 00:12:40.002 "data_size": 63488 00:12:40.002 } 00:12:40.002 ] 00:12:40.002 }' 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.002 15:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.571 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:40.571 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.571 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:40.571 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:40.571 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.571 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.571 15:20:26 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.571 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.571 15:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.571 15:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.571 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.571 "name": "raid_bdev1", 00:12:40.571 "uuid": "e91c1882-c237-4fd5-9c1a-72b721e2f96f", 00:12:40.571 "strip_size_kb": 0, 00:12:40.571 "state": "online", 00:12:40.571 "raid_level": "raid1", 00:12:40.571 "superblock": true, 00:12:40.571 "num_base_bdevs": 2, 00:12:40.571 "num_base_bdevs_discovered": 1, 00:12:40.571 "num_base_bdevs_operational": 1, 00:12:40.571 "base_bdevs_list": [ 00:12:40.571 { 00:12:40.571 "name": null, 00:12:40.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.571 "is_configured": false, 00:12:40.571 "data_offset": 0, 00:12:40.571 "data_size": 63488 00:12:40.571 }, 00:12:40.571 { 00:12:40.571 "name": "BaseBdev2", 00:12:40.571 "uuid": "5beb29f1-9b49-569d-94cc-196cad3345d3", 00:12:40.571 "is_configured": true, 00:12:40.571 "data_offset": 2048, 00:12:40.571 "data_size": 63488 00:12:40.571 } 00:12:40.571 ] 00:12:40.571 }' 00:12:40.571 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.571 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:40.571 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.571 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:40.571 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:40.571 15:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:12:40.571 15:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.571 15:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.571 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:40.571 15:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.571 15:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.571 [2024-11-20 15:20:26.892778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:40.571 [2024-11-20 15:20:26.892842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.571 [2024-11-20 15:20:26.892872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:40.571 [2024-11-20 15:20:26.892896] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.571 [2024-11-20 15:20:26.893350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.571 [2024-11-20 15:20:26.893384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:40.571 [2024-11-20 15:20:26.893476] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:40.571 [2024-11-20 15:20:26.893490] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:40.571 [2024-11-20 15:20:26.893504] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:40.571 [2024-11-20 15:20:26.893516] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:40.571 BaseBdev1 00:12:40.571 15:20:26 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.571 15:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:41.507 15:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:41.507 15:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.507 15:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.507 15:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.507 15:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.507 15:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:41.507 15:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.507 15:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.507 15:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.507 15:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.507 15:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.507 15:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.507 15:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.507 15:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.507 15:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.507 15:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.507 "name": "raid_bdev1", 00:12:41.507 "uuid": "e91c1882-c237-4fd5-9c1a-72b721e2f96f", 00:12:41.507 
"strip_size_kb": 0, 00:12:41.507 "state": "online", 00:12:41.507 "raid_level": "raid1", 00:12:41.507 "superblock": true, 00:12:41.507 "num_base_bdevs": 2, 00:12:41.507 "num_base_bdevs_discovered": 1, 00:12:41.507 "num_base_bdevs_operational": 1, 00:12:41.507 "base_bdevs_list": [ 00:12:41.507 { 00:12:41.507 "name": null, 00:12:41.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.507 "is_configured": false, 00:12:41.507 "data_offset": 0, 00:12:41.507 "data_size": 63488 00:12:41.507 }, 00:12:41.507 { 00:12:41.507 "name": "BaseBdev2", 00:12:41.507 "uuid": "5beb29f1-9b49-569d-94cc-196cad3345d3", 00:12:41.507 "is_configured": true, 00:12:41.507 "data_offset": 2048, 00:12:41.507 "data_size": 63488 00:12:41.507 } 00:12:41.507 ] 00:12:41.507 }' 00:12:41.507 15:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.507 15:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.074 15:20:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:42.074 15:20:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.074 15:20:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:42.074 15:20:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:42.074 15:20:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.074 15:20:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.075 15:20:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.075 15:20:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.075 15:20:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.075 15:20:28 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.075 15:20:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.075 "name": "raid_bdev1", 00:12:42.075 "uuid": "e91c1882-c237-4fd5-9c1a-72b721e2f96f", 00:12:42.075 "strip_size_kb": 0, 00:12:42.075 "state": "online", 00:12:42.075 "raid_level": "raid1", 00:12:42.075 "superblock": true, 00:12:42.075 "num_base_bdevs": 2, 00:12:42.075 "num_base_bdevs_discovered": 1, 00:12:42.075 "num_base_bdevs_operational": 1, 00:12:42.075 "base_bdevs_list": [ 00:12:42.075 { 00:12:42.075 "name": null, 00:12:42.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.075 "is_configured": false, 00:12:42.075 "data_offset": 0, 00:12:42.075 "data_size": 63488 00:12:42.075 }, 00:12:42.075 { 00:12:42.075 "name": "BaseBdev2", 00:12:42.075 "uuid": "5beb29f1-9b49-569d-94cc-196cad3345d3", 00:12:42.075 "is_configured": true, 00:12:42.075 "data_offset": 2048, 00:12:42.075 "data_size": 63488 00:12:42.075 } 00:12:42.075 ] 00:12:42.075 }' 00:12:42.075 15:20:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.075 15:20:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:42.075 15:20:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.075 15:20:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:42.075 15:20:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:42.075 15:20:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:12:42.075 15:20:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:42.075 15:20:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local 
arg=rpc_cmd 00:12:42.075 15:20:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:42.075 15:20:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:42.075 15:20:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:42.075 15:20:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:42.075 15:20:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.075 15:20:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.075 [2024-11-20 15:20:28.467620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:42.075 [2024-11-20 15:20:28.467804] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:42.075 [2024-11-20 15:20:28.467827] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:42.075 request: 00:12:42.075 { 00:12:42.075 "base_bdev": "BaseBdev1", 00:12:42.075 "raid_bdev": "raid_bdev1", 00:12:42.075 "method": "bdev_raid_add_base_bdev", 00:12:42.075 "req_id": 1 00:12:42.075 } 00:12:42.075 Got JSON-RPC error response 00:12:42.075 response: 00:12:42.075 { 00:12:42.075 "code": -22, 00:12:42.075 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:42.075 } 00:12:42.075 15:20:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:42.075 15:20:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:12:42.075 15:20:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:42.075 15:20:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:42.075 15:20:28 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:42.075 15:20:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:43.010 15:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:43.010 15:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.010 15:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.010 15:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.010 15:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.011 15:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:43.011 15:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.011 15:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.011 15:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.011 15:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.011 15:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.011 15:20:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.011 15:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.011 15:20:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.269 15:20:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.269 15:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.269 "name": "raid_bdev1", 00:12:43.269 "uuid": 
"e91c1882-c237-4fd5-9c1a-72b721e2f96f", 00:12:43.269 "strip_size_kb": 0, 00:12:43.269 "state": "online", 00:12:43.269 "raid_level": "raid1", 00:12:43.269 "superblock": true, 00:12:43.269 "num_base_bdevs": 2, 00:12:43.269 "num_base_bdevs_discovered": 1, 00:12:43.269 "num_base_bdevs_operational": 1, 00:12:43.269 "base_bdevs_list": [ 00:12:43.269 { 00:12:43.269 "name": null, 00:12:43.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.269 "is_configured": false, 00:12:43.269 "data_offset": 0, 00:12:43.269 "data_size": 63488 00:12:43.269 }, 00:12:43.269 { 00:12:43.269 "name": "BaseBdev2", 00:12:43.269 "uuid": "5beb29f1-9b49-569d-94cc-196cad3345d3", 00:12:43.269 "is_configured": true, 00:12:43.269 "data_offset": 2048, 00:12:43.269 "data_size": 63488 00:12:43.269 } 00:12:43.269 ] 00:12:43.269 }' 00:12:43.269 15:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.269 15:20:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.528 15:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:43.528 15:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.528 15:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:43.528 15:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:43.528 15:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.528 15:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.528 15:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.528 15:20:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.528 15:20:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:12:43.528 15:20:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.528 15:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.528 "name": "raid_bdev1", 00:12:43.528 "uuid": "e91c1882-c237-4fd5-9c1a-72b721e2f96f", 00:12:43.528 "strip_size_kb": 0, 00:12:43.528 "state": "online", 00:12:43.528 "raid_level": "raid1", 00:12:43.528 "superblock": true, 00:12:43.528 "num_base_bdevs": 2, 00:12:43.528 "num_base_bdevs_discovered": 1, 00:12:43.528 "num_base_bdevs_operational": 1, 00:12:43.528 "base_bdevs_list": [ 00:12:43.528 { 00:12:43.528 "name": null, 00:12:43.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.528 "is_configured": false, 00:12:43.528 "data_offset": 0, 00:12:43.528 "data_size": 63488 00:12:43.528 }, 00:12:43.528 { 00:12:43.528 "name": "BaseBdev2", 00:12:43.528 "uuid": "5beb29f1-9b49-569d-94cc-196cad3345d3", 00:12:43.528 "is_configured": true, 00:12:43.528 "data_offset": 2048, 00:12:43.528 "data_size": 63488 00:12:43.528 } 00:12:43.528 ] 00:12:43.528 }' 00:12:43.528 15:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:43.528 15:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:43.528 15:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.528 15:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:43.528 15:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75531 00:12:43.528 15:20:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75531 ']' 00:12:43.528 15:20:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75531 00:12:43.528 15:20:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:43.528 15:20:29 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:43.528 15:20:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75531 00:12:43.786 15:20:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:43.786 15:20:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:43.786 killing process with pid 75531 00:12:43.786 15:20:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75531' 00:12:43.786 Received shutdown signal, test time was about 60.000000 seconds 00:12:43.786 00:12:43.786 Latency(us) 00:12:43.786 [2024-11-20T15:20:30.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.786 [2024-11-20T15:20:30.268Z] =================================================================================================================== 00:12:43.786 [2024-11-20T15:20:30.268Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:43.786 15:20:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75531 00:12:43.786 [2024-11-20 15:20:30.016424] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:43.786 15:20:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75531 00:12:43.786 [2024-11-20 15:20:30.016546] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:43.786 [2024-11-20 15:20:30.016595] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:43.786 [2024-11-20 15:20:30.016609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:44.083 [2024-11-20 15:20:30.320724] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:45.038 15:20:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 
00:12:45.038 00:12:45.038 real 0m23.408s 00:12:45.038 user 0m27.721s 00:12:45.038 sys 0m4.306s 00:12:45.038 15:20:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.038 15:20:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.038 ************************************ 00:12:45.038 END TEST raid_rebuild_test_sb 00:12:45.038 ************************************ 00:12:45.038 15:20:31 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:12:45.038 15:20:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:45.038 15:20:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.038 15:20:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:45.299 ************************************ 00:12:45.299 START TEST raid_rebuild_test_io 00:12:45.299 ************************************ 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76263 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76263 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 
76263 ']' 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:45.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:45.299 15:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.299 [2024-11-20 15:20:31.632550] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:12:45.299 [2024-11-20 15:20:31.632679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76263 ] 00:12:45.299 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:45.299 Zero copy mechanism will not be used. 
00:12:45.558 [2024-11-20 15:20:31.794107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.558 [2024-11-20 15:20:31.910115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.817 [2024-11-20 15:20:32.107162] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:45.817 [2024-11-20 15:20:32.107231] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:46.076 15:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:46.076 15:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:12:46.076 15:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:46.076 15:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:46.076 15:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.076 15:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.076 BaseBdev1_malloc 00:12:46.076 15:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.076 15:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:46.076 15:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.076 15:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.076 [2024-11-20 15:20:32.534610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:46.076 [2024-11-20 15:20:32.534687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.076 [2024-11-20 15:20:32.534712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:46.076 [2024-11-20 
15:20:32.534734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.076 [2024-11-20 15:20:32.537104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.076 [2024-11-20 15:20:32.537148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:46.076 BaseBdev1 00:12:46.076 15:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.076 15:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:46.076 15:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:46.076 15:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.076 15:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.334 BaseBdev2_malloc 00:12:46.334 15:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.334 15:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:46.334 15:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.334 15:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.334 [2024-11-20 15:20:32.588604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:46.334 [2024-11-20 15:20:32.588684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.334 [2024-11-20 15:20:32.588713] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:46.334 [2024-11-20 15:20:32.588727] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.334 [2024-11-20 15:20:32.591121] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:12:46.334 [2024-11-20 15:20:32.591169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:46.334 BaseBdev2 00:12:46.334 15:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.334 15:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:46.334 15:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.334 15:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.334 spare_malloc 00:12:46.334 15:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.334 15:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:46.334 15:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.334 15:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.334 spare_delay 00:12:46.334 15:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.334 15:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:46.334 15:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.334 15:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.334 [2024-11-20 15:20:32.671117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:46.334 [2024-11-20 15:20:32.671186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.334 [2024-11-20 15:20:32.671209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:46.334 [2024-11-20 15:20:32.671224] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.334 [2024-11-20 15:20:32.673667] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.334 [2024-11-20 15:20:32.673709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:46.334 spare 00:12:46.334 15:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.334 15:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:46.334 15:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.334 15:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.334 [2024-11-20 15:20:32.683143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:46.334 [2024-11-20 15:20:32.685205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:46.334 [2024-11-20 15:20:32.685308] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:46.335 [2024-11-20 15:20:32.685324] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:46.335 [2024-11-20 15:20:32.685591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:46.335 [2024-11-20 15:20:32.685764] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:46.335 [2024-11-20 15:20:32.685785] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:46.335 [2024-11-20 15:20:32.685941] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.335 15:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.335 15:20:32 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:46.335 15:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.335 15:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.335 15:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.335 15:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.335 15:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:46.335 15:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.335 15:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.335 15:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.335 15:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.335 15:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.335 15:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.335 15:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.335 15:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.335 15:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.335 15:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.335 "name": "raid_bdev1", 00:12:46.335 "uuid": "e79d1810-425a-421b-a8f6-31d6155f0fda", 00:12:46.335 "strip_size_kb": 0, 00:12:46.335 "state": "online", 00:12:46.335 "raid_level": "raid1", 00:12:46.335 "superblock": false, 00:12:46.335 "num_base_bdevs": 2, 00:12:46.335 
"num_base_bdevs_discovered": 2, 00:12:46.335 "num_base_bdevs_operational": 2, 00:12:46.335 "base_bdevs_list": [ 00:12:46.335 { 00:12:46.335 "name": "BaseBdev1", 00:12:46.335 "uuid": "929eccd4-72cd-5eae-9bba-53ecc06d7a88", 00:12:46.335 "is_configured": true, 00:12:46.335 "data_offset": 0, 00:12:46.335 "data_size": 65536 00:12:46.335 }, 00:12:46.335 { 00:12:46.335 "name": "BaseBdev2", 00:12:46.335 "uuid": "823ed872-c9f9-5f3b-831f-eb7694bb92a6", 00:12:46.335 "is_configured": true, 00:12:46.335 "data_offset": 0, 00:12:46.335 "data_size": 65536 00:12:46.335 } 00:12:46.335 ] 00:12:46.335 }' 00:12:46.335 15:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.335 15:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.904 [2024-11-20 15:20:33.103299] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:46.904 [2024-11-20 15:20:33.186883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.904 "name": "raid_bdev1", 00:12:46.904 "uuid": "e79d1810-425a-421b-a8f6-31d6155f0fda", 00:12:46.904 "strip_size_kb": 0, 00:12:46.904 "state": "online", 00:12:46.904 "raid_level": "raid1", 00:12:46.904 "superblock": false, 00:12:46.904 "num_base_bdevs": 2, 00:12:46.904 "num_base_bdevs_discovered": 1, 00:12:46.904 "num_base_bdevs_operational": 1, 00:12:46.904 "base_bdevs_list": [ 00:12:46.904 { 00:12:46.904 "name": null, 00:12:46.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.904 "is_configured": false, 00:12:46.904 "data_offset": 0, 00:12:46.904 "data_size": 65536 00:12:46.904 }, 00:12:46.904 { 00:12:46.904 "name": "BaseBdev2", 00:12:46.904 "uuid": "823ed872-c9f9-5f3b-831f-eb7694bb92a6", 00:12:46.904 "is_configured": true, 00:12:46.904 "data_offset": 0, 00:12:46.904 "data_size": 65536 00:12:46.904 } 00:12:46.904 ] 00:12:46.904 }' 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.904 15:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.904 [2024-11-20 15:20:33.286818] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:46.904 I/O size of 3145728 is greater 
than zero copy threshold (65536). 00:12:46.904 Zero copy mechanism will not be used. 00:12:46.904 Running I/O for 60 seconds... 00:12:47.163 15:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:47.163 15:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.163 15:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.163 [2024-11-20 15:20:33.578090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:47.163 15:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.163 15:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:47.163 [2024-11-20 15:20:33.632869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:47.163 [2024-11-20 15:20:33.635040] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:47.421 [2024-11-20 15:20:33.743166] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:47.421 [2024-11-20 15:20:33.743741] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:47.679 [2024-11-20 15:20:33.958643] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:47.679 [2024-11-20 15:20:33.958986] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:47.939 189.00 IOPS, 567.00 MiB/s [2024-11-20T15:20:34.421Z] [2024-11-20 15:20:34.299695] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:47.939 [2024-11-20 15:20:34.300303] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:48.197 [2024-11-20 15:20:34.509963] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:48.197 15:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:48.197 15:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.197 15:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:48.197 15:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:48.197 15:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.197 15:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.197 15:20:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.197 15:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.197 15:20:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.197 15:20:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.456 15:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.456 "name": "raid_bdev1", 00:12:48.456 "uuid": "e79d1810-425a-421b-a8f6-31d6155f0fda", 00:12:48.456 "strip_size_kb": 0, 00:12:48.456 "state": "online", 00:12:48.456 "raid_level": "raid1", 00:12:48.456 "superblock": false, 00:12:48.456 "num_base_bdevs": 2, 00:12:48.456 "num_base_bdevs_discovered": 2, 00:12:48.456 "num_base_bdevs_operational": 2, 00:12:48.456 "process": { 00:12:48.456 "type": "rebuild", 00:12:48.456 "target": "spare", 00:12:48.456 "progress": { 00:12:48.456 "blocks": 12288, 00:12:48.456 "percent": 18 00:12:48.456 } 00:12:48.456 }, 
00:12:48.456 "base_bdevs_list": [ 00:12:48.456 { 00:12:48.456 "name": "spare", 00:12:48.456 "uuid": "8b6714c1-c3a8-5d8b-bb76-f857708d2ac9", 00:12:48.456 "is_configured": true, 00:12:48.456 "data_offset": 0, 00:12:48.456 "data_size": 65536 00:12:48.456 }, 00:12:48.456 { 00:12:48.456 "name": "BaseBdev2", 00:12:48.456 "uuid": "823ed872-c9f9-5f3b-831f-eb7694bb92a6", 00:12:48.456 "is_configured": true, 00:12:48.456 "data_offset": 0, 00:12:48.456 "data_size": 65536 00:12:48.456 } 00:12:48.456 ] 00:12:48.456 }' 00:12:48.456 15:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.456 15:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:48.456 15:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.456 15:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:48.456 15:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:48.456 15:20:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.456 15:20:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.456 [2024-11-20 15:20:34.778637] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:48.456 [2024-11-20 15:20:34.826317] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:48.456 [2024-11-20 15:20:34.826626] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:48.456 [2024-11-20 15:20:34.839211] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:48.456 [2024-11-20 15:20:34.847644] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:12:48.456 [2024-11-20 15:20:34.847711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:48.456 [2024-11-20 15:20:34.847727] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:48.456 [2024-11-20 15:20:34.897351] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:48.456 15:20:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.456 15:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:48.456 15:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:48.456 15:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.456 15:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.456 15:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.456 15:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:48.456 15:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.456 15:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.456 15:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.456 15:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.456 15:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.456 15:20:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.456 15:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.456 15:20:34 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:12:48.714 15:20:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.714 15:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.714 "name": "raid_bdev1", 00:12:48.714 "uuid": "e79d1810-425a-421b-a8f6-31d6155f0fda", 00:12:48.714 "strip_size_kb": 0, 00:12:48.714 "state": "online", 00:12:48.714 "raid_level": "raid1", 00:12:48.714 "superblock": false, 00:12:48.714 "num_base_bdevs": 2, 00:12:48.714 "num_base_bdevs_discovered": 1, 00:12:48.714 "num_base_bdevs_operational": 1, 00:12:48.714 "base_bdevs_list": [ 00:12:48.714 { 00:12:48.714 "name": null, 00:12:48.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.714 "is_configured": false, 00:12:48.714 "data_offset": 0, 00:12:48.714 "data_size": 65536 00:12:48.714 }, 00:12:48.714 { 00:12:48.714 "name": "BaseBdev2", 00:12:48.714 "uuid": "823ed872-c9f9-5f3b-831f-eb7694bb92a6", 00:12:48.714 "is_configured": true, 00:12:48.714 "data_offset": 0, 00:12:48.714 "data_size": 65536 00:12:48.714 } 00:12:48.714 ] 00:12:48.714 }' 00:12:48.714 15:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.714 15:20:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.973 186.50 IOPS, 559.50 MiB/s [2024-11-20T15:20:35.455Z] 15:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:48.973 15:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.973 15:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:48.973 15:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:48.973 15:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.973 15:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.973 15:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.973 15:20:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.973 15:20:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.973 15:20:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.973 15:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.973 "name": "raid_bdev1", 00:12:48.973 "uuid": "e79d1810-425a-421b-a8f6-31d6155f0fda", 00:12:48.973 "strip_size_kb": 0, 00:12:48.973 "state": "online", 00:12:48.973 "raid_level": "raid1", 00:12:48.973 "superblock": false, 00:12:48.973 "num_base_bdevs": 2, 00:12:48.973 "num_base_bdevs_discovered": 1, 00:12:48.973 "num_base_bdevs_operational": 1, 00:12:48.973 "base_bdevs_list": [ 00:12:48.973 { 00:12:48.973 "name": null, 00:12:48.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.973 "is_configured": false, 00:12:48.973 "data_offset": 0, 00:12:48.973 "data_size": 65536 00:12:48.973 }, 00:12:48.973 { 00:12:48.973 "name": "BaseBdev2", 00:12:48.973 "uuid": "823ed872-c9f9-5f3b-831f-eb7694bb92a6", 00:12:48.973 "is_configured": true, 00:12:48.973 "data_offset": 0, 00:12:48.973 "data_size": 65536 00:12:48.973 } 00:12:48.973 ] 00:12:48.973 }' 00:12:48.973 15:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.973 15:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:48.973 15:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.234 15:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:49.234 15:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev 
raid_bdev1 spare 00:12:49.234 15:20:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.234 15:20:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.234 [2024-11-20 15:20:35.482741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:49.234 15:20:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.234 15:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:49.234 [2024-11-20 15:20:35.534361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:49.234 [2024-11-20 15:20:35.536696] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:49.234 [2024-11-20 15:20:35.643964] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:49.234 [2024-11-20 15:20:35.644721] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:49.492 [2024-11-20 15:20:35.764943] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:49.492 [2024-11-20 15:20:35.765444] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:49.750 [2024-11-20 15:20:36.098265] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:50.008 180.33 IOPS, 541.00 MiB/s [2024-11-20T15:20:36.491Z] [2024-11-20 15:20:36.320154] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:50.268 15:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:50.268 15:20:36 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.268 15:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:50.268 15:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:50.268 15:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.268 15:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.268 15:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.269 15:20:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.269 15:20:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.269 [2024-11-20 15:20:36.543696] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:50.269 [2024-11-20 15:20:36.544172] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:50.269 15:20:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.269 15:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.269 "name": "raid_bdev1", 00:12:50.269 "uuid": "e79d1810-425a-421b-a8f6-31d6155f0fda", 00:12:50.269 "strip_size_kb": 0, 00:12:50.269 "state": "online", 00:12:50.269 "raid_level": "raid1", 00:12:50.269 "superblock": false, 00:12:50.269 "num_base_bdevs": 2, 00:12:50.269 "num_base_bdevs_discovered": 2, 00:12:50.269 "num_base_bdevs_operational": 2, 00:12:50.269 "process": { 00:12:50.269 "type": "rebuild", 00:12:50.269 "target": "spare", 00:12:50.269 "progress": { 00:12:50.269 "blocks": 14336, 00:12:50.269 "percent": 21 00:12:50.269 } 00:12:50.269 }, 00:12:50.269 "base_bdevs_list": [ 00:12:50.269 { 00:12:50.269 "name": "spare", 
00:12:50.269 "uuid": "8b6714c1-c3a8-5d8b-bb76-f857708d2ac9", 00:12:50.269 "is_configured": true, 00:12:50.269 "data_offset": 0, 00:12:50.269 "data_size": 65536 00:12:50.269 }, 00:12:50.269 { 00:12:50.269 "name": "BaseBdev2", 00:12:50.269 "uuid": "823ed872-c9f9-5f3b-831f-eb7694bb92a6", 00:12:50.269 "is_configured": true, 00:12:50.269 "data_offset": 0, 00:12:50.269 "data_size": 65536 00:12:50.269 } 00:12:50.269 ] 00:12:50.269 }' 00:12:50.269 15:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.269 15:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:50.269 15:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.269 15:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:50.269 15:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:50.269 15:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:50.269 15:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:50.269 15:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:50.269 15:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=400 00:12:50.269 15:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:50.269 15:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:50.269 15:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.269 15:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:50.269 15:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:50.269 
15:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.269 15:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.269 15:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.269 15:20:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.269 15:20:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.269 15:20:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.269 15:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.269 "name": "raid_bdev1", 00:12:50.269 "uuid": "e79d1810-425a-421b-a8f6-31d6155f0fda", 00:12:50.269 "strip_size_kb": 0, 00:12:50.269 "state": "online", 00:12:50.269 "raid_level": "raid1", 00:12:50.269 "superblock": false, 00:12:50.269 "num_base_bdevs": 2, 00:12:50.269 "num_base_bdevs_discovered": 2, 00:12:50.269 "num_base_bdevs_operational": 2, 00:12:50.269 "process": { 00:12:50.269 "type": "rebuild", 00:12:50.269 "target": "spare", 00:12:50.269 "progress": { 00:12:50.269 "blocks": 14336, 00:12:50.269 "percent": 21 00:12:50.269 } 00:12:50.269 }, 00:12:50.269 "base_bdevs_list": [ 00:12:50.269 { 00:12:50.269 "name": "spare", 00:12:50.269 "uuid": "8b6714c1-c3a8-5d8b-bb76-f857708d2ac9", 00:12:50.269 "is_configured": true, 00:12:50.269 "data_offset": 0, 00:12:50.269 "data_size": 65536 00:12:50.269 }, 00:12:50.269 { 00:12:50.269 "name": "BaseBdev2", 00:12:50.269 "uuid": "823ed872-c9f9-5f3b-831f-eb7694bb92a6", 00:12:50.269 "is_configured": true, 00:12:50.269 "data_offset": 0, 00:12:50.269 "data_size": 65536 00:12:50.269 } 00:12:50.269 ] 00:12:50.269 }' 00:12:50.269 15:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.529 [2024-11-20 15:20:36.765545] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:50.529 [2024-11-20 15:20:36.766105] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:50.529 15:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:50.529 15:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.529 15:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:50.529 15:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:50.788 [2024-11-20 15:20:37.132434] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:51.306 155.00 IOPS, 465.00 MiB/s [2024-11-20T15:20:37.788Z] [2024-11-20 15:20:37.620245] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:51.565 15:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:51.565 15:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:51.565 15:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:51.565 15:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:51.566 15:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:51.566 15:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:51.566 15:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.566 15:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:51.566 15:20:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.566 15:20:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.566 [2024-11-20 15:20:37.856344] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:51.566 15:20:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.566 15:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.566 "name": "raid_bdev1", 00:12:51.566 "uuid": "e79d1810-425a-421b-a8f6-31d6155f0fda", 00:12:51.566 "strip_size_kb": 0, 00:12:51.566 "state": "online", 00:12:51.566 "raid_level": "raid1", 00:12:51.566 "superblock": false, 00:12:51.566 "num_base_bdevs": 2, 00:12:51.566 "num_base_bdevs_discovered": 2, 00:12:51.566 "num_base_bdevs_operational": 2, 00:12:51.566 "process": { 00:12:51.566 "type": "rebuild", 00:12:51.566 "target": "spare", 00:12:51.566 "progress": { 00:12:51.566 "blocks": 30720, 00:12:51.566 "percent": 46 00:12:51.566 } 00:12:51.566 }, 00:12:51.566 "base_bdevs_list": [ 00:12:51.566 { 00:12:51.566 "name": "spare", 00:12:51.566 "uuid": "8b6714c1-c3a8-5d8b-bb76-f857708d2ac9", 00:12:51.566 "is_configured": true, 00:12:51.566 "data_offset": 0, 00:12:51.566 "data_size": 65536 00:12:51.566 }, 00:12:51.566 { 00:12:51.566 "name": "BaseBdev2", 00:12:51.566 "uuid": "823ed872-c9f9-5f3b-831f-eb7694bb92a6", 00:12:51.566 "is_configured": true, 00:12:51.566 "data_offset": 0, 00:12:51.566 "data_size": 65536 00:12:51.566 } 00:12:51.566 ] 00:12:51.566 }' 00:12:51.566 15:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.566 15:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:51.566 15:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:12:51.566 [2024-11-20 15:20:37.963453] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:51.566 15:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:51.566 15:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:51.826 [2024-11-20 15:20:38.186863] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:52.090 135.00 IOPS, 405.00 MiB/s [2024-11-20T15:20:38.572Z] [2024-11-20 15:20:38.529453] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:12:52.090 [2024-11-20 15:20:38.530024] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:12:52.349 [2024-11-20 15:20:38.751865] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:52.608 15:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:52.608 15:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:52.608 15:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.608 15:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:52.609 15:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:52.609 15:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.609 15:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.609 15:20:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.609 15:20:38 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.609 15:20:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.609 15:20:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.609 15:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.609 "name": "raid_bdev1", 00:12:52.609 "uuid": "e79d1810-425a-421b-a8f6-31d6155f0fda", 00:12:52.609 "strip_size_kb": 0, 00:12:52.609 "state": "online", 00:12:52.609 "raid_level": "raid1", 00:12:52.609 "superblock": false, 00:12:52.609 "num_base_bdevs": 2, 00:12:52.609 "num_base_bdevs_discovered": 2, 00:12:52.609 "num_base_bdevs_operational": 2, 00:12:52.609 "process": { 00:12:52.609 "type": "rebuild", 00:12:52.609 "target": "spare", 00:12:52.609 "progress": { 00:12:52.609 "blocks": 49152, 00:12:52.609 "percent": 75 00:12:52.609 } 00:12:52.609 }, 00:12:52.609 "base_bdevs_list": [ 00:12:52.609 { 00:12:52.609 "name": "spare", 00:12:52.609 "uuid": "8b6714c1-c3a8-5d8b-bb76-f857708d2ac9", 00:12:52.609 "is_configured": true, 00:12:52.609 "data_offset": 0, 00:12:52.609 "data_size": 65536 00:12:52.609 }, 00:12:52.609 { 00:12:52.609 "name": "BaseBdev2", 00:12:52.609 "uuid": "823ed872-c9f9-5f3b-831f-eb7694bb92a6", 00:12:52.609 "is_configured": true, 00:12:52.609 "data_offset": 0, 00:12:52.609 "data_size": 65536 00:12:52.609 } 00:12:52.609 ] 00:12:52.609 }' 00:12:52.609 15:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.867 15:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:52.867 15:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.867 15:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:52.867 15:20:39 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:12:53.126 117.17 IOPS, 351.50 MiB/s [2024-11-20T15:20:39.608Z] [2024-11-20 15:20:39.412492] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:53.384 [2024-11-20 15:20:39.858584] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:53.644 [2024-11-20 15:20:39.958408] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:53.644 [2024-11-20 15:20:39.960605] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.914 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:53.914 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:53.914 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:53.914 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:53.914 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:53.914 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.914 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.914 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.914 15:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.914 15:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.914 15:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.914 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.914 "name": "raid_bdev1", 
00:12:53.914 "uuid": "e79d1810-425a-421b-a8f6-31d6155f0fda", 00:12:53.914 "strip_size_kb": 0, 00:12:53.914 "state": "online", 00:12:53.914 "raid_level": "raid1", 00:12:53.914 "superblock": false, 00:12:53.914 "num_base_bdevs": 2, 00:12:53.914 "num_base_bdevs_discovered": 2, 00:12:53.914 "num_base_bdevs_operational": 2, 00:12:53.914 "base_bdevs_list": [ 00:12:53.914 { 00:12:53.914 "name": "spare", 00:12:53.914 "uuid": "8b6714c1-c3a8-5d8b-bb76-f857708d2ac9", 00:12:53.914 "is_configured": true, 00:12:53.914 "data_offset": 0, 00:12:53.914 "data_size": 65536 00:12:53.914 }, 00:12:53.914 { 00:12:53.914 "name": "BaseBdev2", 00:12:53.914 "uuid": "823ed872-c9f9-5f3b-831f-eb7694bb92a6", 00:12:53.914 "is_configured": true, 00:12:53.914 "data_offset": 0, 00:12:53.914 "data_size": 65536 00:12:53.914 } 00:12:53.914 ] 00:12:53.914 }' 00:12:53.914 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:53.915 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:53.915 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:53.915 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:53.915 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:53.915 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:53.915 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:53.915 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:53.915 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:53.915 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.915 15:20:40 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.915 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.915 15:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.915 15:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.915 106.29 IOPS, 318.86 MiB/s [2024-11-20T15:20:40.397Z] 15:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.915 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.915 "name": "raid_bdev1", 00:12:53.915 "uuid": "e79d1810-425a-421b-a8f6-31d6155f0fda", 00:12:53.915 "strip_size_kb": 0, 00:12:53.915 "state": "online", 00:12:53.915 "raid_level": "raid1", 00:12:53.915 "superblock": false, 00:12:53.915 "num_base_bdevs": 2, 00:12:53.915 "num_base_bdevs_discovered": 2, 00:12:53.915 "num_base_bdevs_operational": 2, 00:12:53.915 "base_bdevs_list": [ 00:12:53.915 { 00:12:53.915 "name": "spare", 00:12:53.915 "uuid": "8b6714c1-c3a8-5d8b-bb76-f857708d2ac9", 00:12:53.915 "is_configured": true, 00:12:53.915 "data_offset": 0, 00:12:53.915 "data_size": 65536 00:12:53.915 }, 00:12:53.915 { 00:12:53.915 "name": "BaseBdev2", 00:12:53.915 "uuid": "823ed872-c9f9-5f3b-831f-eb7694bb92a6", 00:12:53.915 "is_configured": true, 00:12:53.915 "data_offset": 0, 00:12:53.915 "data_size": 65536 00:12:53.915 } 00:12:53.915 ] 00:12:53.915 }' 00:12:53.915 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:53.915 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:53.915 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.190 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:54.190 15:20:40 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:54.190 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.190 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.190 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.190 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.190 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:54.190 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.190 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.190 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.190 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.190 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.190 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.190 15:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.190 15:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.190 15:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.190 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.190 "name": "raid_bdev1", 00:12:54.190 "uuid": "e79d1810-425a-421b-a8f6-31d6155f0fda", 00:12:54.190 "strip_size_kb": 0, 00:12:54.190 "state": "online", 00:12:54.190 "raid_level": "raid1", 00:12:54.190 "superblock": false, 00:12:54.190 "num_base_bdevs": 2, 
00:12:54.190 "num_base_bdevs_discovered": 2, 00:12:54.190 "num_base_bdevs_operational": 2, 00:12:54.190 "base_bdevs_list": [ 00:12:54.190 { 00:12:54.190 "name": "spare", 00:12:54.190 "uuid": "8b6714c1-c3a8-5d8b-bb76-f857708d2ac9", 00:12:54.190 "is_configured": true, 00:12:54.190 "data_offset": 0, 00:12:54.190 "data_size": 65536 00:12:54.190 }, 00:12:54.190 { 00:12:54.190 "name": "BaseBdev2", 00:12:54.190 "uuid": "823ed872-c9f9-5f3b-831f-eb7694bb92a6", 00:12:54.190 "is_configured": true, 00:12:54.190 "data_offset": 0, 00:12:54.190 "data_size": 65536 00:12:54.190 } 00:12:54.190 ] 00:12:54.190 }' 00:12:54.190 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.190 15:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.449 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:54.449 15:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.449 15:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.449 [2024-11-20 15:20:40.880096] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:54.449 [2024-11-20 15:20:40.880125] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:54.707 00:12:54.708 Latency(us) 00:12:54.708 [2024-11-20T15:20:41.190Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:54.708 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:54.708 raid_bdev1 : 7.68 99.44 298.31 0.00 0.00 13323.00 291.16 108647.63 00:12:54.708 [2024-11-20T15:20:41.190Z] =================================================================================================================== 00:12:54.708 [2024-11-20T15:20:41.190Z] Total : 99.44 298.31 0.00 0.00 13323.00 291.16 108647.63 00:12:54.708 [2024-11-20 
15:20:40.982415] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:54.708 [2024-11-20 15:20:40.982682] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:54.708 [2024-11-20 15:20:40.982831] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:54.708 [2024-11-20 15:20:40.983109] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:54.708 { 00:12:54.708 "results": [ 00:12:54.708 { 00:12:54.708 "job": "raid_bdev1", 00:12:54.708 "core_mask": "0x1", 00:12:54.708 "workload": "randrw", 00:12:54.708 "percentage": 50, 00:12:54.708 "status": "finished", 00:12:54.708 "queue_depth": 2, 00:12:54.708 "io_size": 3145728, 00:12:54.708 "runtime": 7.683349, 00:12:54.708 "iops": 99.4358059226517, 00:12:54.708 "mibps": 298.3074177679551, 00:12:54.708 "io_failed": 0, 00:12:54.708 "io_timeout": 0, 00:12:54.708 "avg_latency_us": 13322.997035261465, 00:12:54.708 "min_latency_us": 291.16144578313254, 00:12:54.708 "max_latency_us": 108647.63373493977 00:12:54.708 } 00:12:54.708 ], 00:12:54.708 "core_count": 1 00:12:54.708 } 00:12:54.708 15:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.708 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.708 15:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:54.708 15:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.708 15:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.708 15:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.708 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:54.708 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' 
true = true ']' 00:12:54.708 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:54.708 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:54.708 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:54.708 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:54.708 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:54.708 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:54.708 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:54.708 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:54.708 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:54.708 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:54.708 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:54.969 /dev/nbd0 00:12:54.969 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:54.969 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:54.969 15:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:54.970 15:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:54.970 15:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:54.970 15:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:54.970 15:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:12:54.970 15:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:54.970 15:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:54.970 15:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:54.970 15:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.970 1+0 records in 00:12:54.970 1+0 records out 00:12:54.970 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000752458 s, 5.4 MB/s 00:12:54.970 15:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.970 15:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:54.970 15:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.970 15:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:54.970 15:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:54.970 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.970 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:54.970 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:54.970 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:54.970 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:54.970 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:54.970 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev2') 00:12:54.970 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:54.970 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:54.970 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:54.970 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:54.970 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:54.970 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:54.970 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:55.231 /dev/nbd1 00:12:55.231 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:55.231 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:55.231 15:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:55.231 15:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:55.231 15:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:55.231 15:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:55.231 15:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:55.231 15:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:55.231 15:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:55.231 15:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:55.231 15:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.231 1+0 records in 00:12:55.231 1+0 records out 00:12:55.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031571 s, 13.0 MB/s 00:12:55.231 15:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.231 15:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:55.231 15:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.231 15:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:55.231 15:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:55.231 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:55.231 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:55.231 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:55.490 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:55.490 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:55.490 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:55.490 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:55.490 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:55.490 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.490 15:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:55.749 15:20:42 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:55.749 15:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:55.749 15:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:55.749 15:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.749 15:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.749 15:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:55.749 15:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:55.749 15:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.749 15:20:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:55.749 15:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:55.749 15:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:55.749 15:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:55.749 15:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:55.749 15:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.749 15:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:56.008 15:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:56.008 15:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:56.008 15:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:56.008 15:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.008 15:20:42 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.008 15:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:56.008 15:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:56.008 15:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.008 15:20:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:56.008 15:20:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76263 00:12:56.008 15:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76263 ']' 00:12:56.008 15:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76263 00:12:56.008 15:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:12:56.008 15:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:56.008 15:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76263 00:12:56.008 15:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:56.008 15:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:56.008 15:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76263' 00:12:56.008 killing process with pid 76263 00:12:56.008 15:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76263 00:12:56.008 Received shutdown signal, test time was about 9.016946 seconds 00:12:56.008 00:12:56.008 Latency(us) 00:12:56.008 [2024-11-20T15:20:42.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:56.008 [2024-11-20T15:20:42.490Z] 
=================================================================================================================== 00:12:56.008 [2024-11-20T15:20:42.490Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:56.008 [2024-11-20 15:20:42.291493] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:56.008 15:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76263 00:12:56.266 [2024-11-20 15:20:42.526831] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:57.645 00:12:57.645 real 0m12.198s 00:12:57.645 user 0m15.219s 00:12:57.645 sys 0m1.722s 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.645 ************************************ 00:12:57.645 END TEST raid_rebuild_test_io 00:12:57.645 ************************************ 00:12:57.645 15:20:43 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:12:57.645 15:20:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:57.645 15:20:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.645 15:20:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:57.645 ************************************ 00:12:57.645 START TEST raid_rebuild_test_sb_io 00:12:57.645 ************************************ 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:57.645 15:20:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76639 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76639 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 76639 ']' 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:57.645 15:20:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.645 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:57.645 Zero copy mechanism will not be used. 00:12:57.645 [2024-11-20 15:20:43.911296] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:12:57.645 [2024-11-20 15:20:43.911424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76639 ] 00:12:57.645 [2024-11-20 15:20:44.092847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.904 [2024-11-20 15:20:44.209174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.161 [2024-11-20 15:20:44.416456] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:58.161 [2024-11-20 15:20:44.416525] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:58.419 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:58.419 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:12:58.419 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:58.419 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:58.419 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.419 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.419 BaseBdev1_malloc 00:12:58.419 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.419 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:58.419 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.419 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.419 [2024-11-20 15:20:44.803498] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:58.419 [2024-11-20 15:20:44.803570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.420 [2024-11-20 15:20:44.803593] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:58.420 [2024-11-20 15:20:44.803608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.420 [2024-11-20 15:20:44.806041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.420 [2024-11-20 15:20:44.806085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:58.420 BaseBdev1 00:12:58.420 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.420 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:58.420 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:58.420 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.420 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.420 BaseBdev2_malloc 00:12:58.420 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.420 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:58.420 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.420 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.420 [2024-11-20 15:20:44.860809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:58.420 [2024-11-20 15:20:44.860890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:58.420 [2024-11-20 15:20:44.860917] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:58.420 [2024-11-20 15:20:44.860932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.420 [2024-11-20 15:20:44.863391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.420 [2024-11-20 15:20:44.863441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:58.420 BaseBdev2 00:12:58.420 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.420 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:58.420 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.420 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.679 spare_malloc 00:12:58.679 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.679 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:58.679 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.679 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.679 spare_delay 00:12:58.679 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.679 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:58.679 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.679 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.679 
[2024-11-20 15:20:44.941439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:58.679 [2024-11-20 15:20:44.941504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.679 [2024-11-20 15:20:44.941527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:58.679 [2024-11-20 15:20:44.941541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.679 [2024-11-20 15:20:44.943902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.679 [2024-11-20 15:20:44.943945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:58.679 spare 00:12:58.679 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.679 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:58.679 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.679 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.679 [2024-11-20 15:20:44.953481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:58.679 [2024-11-20 15:20:44.955520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:58.679 [2024-11-20 15:20:44.955703] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:58.679 [2024-11-20 15:20:44.955721] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:58.679 [2024-11-20 15:20:44.955988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:58.679 [2024-11-20 15:20:44.956154] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:58.679 [2024-11-20 
15:20:44.956164] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:58.679 [2024-11-20 15:20:44.956317] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.679 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.679 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:58.679 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.679 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.679 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.679 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.679 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:58.679 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.679 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.679 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.679 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.679 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.679 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.679 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.679 15:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.679 15:20:44 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.679 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.679 "name": "raid_bdev1", 00:12:58.679 "uuid": "81216027-e833-4773-bcf2-c0964189b51a", 00:12:58.679 "strip_size_kb": 0, 00:12:58.679 "state": "online", 00:12:58.679 "raid_level": "raid1", 00:12:58.679 "superblock": true, 00:12:58.679 "num_base_bdevs": 2, 00:12:58.679 "num_base_bdevs_discovered": 2, 00:12:58.679 "num_base_bdevs_operational": 2, 00:12:58.679 "base_bdevs_list": [ 00:12:58.679 { 00:12:58.679 "name": "BaseBdev1", 00:12:58.679 "uuid": "07ed3209-db92-5926-b5a7-c688f5e9340e", 00:12:58.679 "is_configured": true, 00:12:58.679 "data_offset": 2048, 00:12:58.679 "data_size": 63488 00:12:58.679 }, 00:12:58.679 { 00:12:58.679 "name": "BaseBdev2", 00:12:58.679 "uuid": "22a31d58-cfce-5ed2-a721-e11dea7e5cdf", 00:12:58.679 "is_configured": true, 00:12:58.679 "data_offset": 2048, 00:12:58.679 "data_size": 63488 00:12:58.679 } 00:12:58.679 ] 00:12:58.679 }' 00:12:58.679 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.679 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.939 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:58.939 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.939 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:58.939 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.939 [2024-11-20 15:20:45.365154] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:58.939 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.939 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:12:58.939 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.939 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:58.939 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.939 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.198 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.198 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:59.198 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:59.198 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:59.198 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:59.198 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.198 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.198 [2024-11-20 15:20:45.460750] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:59.198 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.198 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:59.198 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.198 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.198 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:12:59.198 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.198 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:59.198 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.198 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.198 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.198 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.198 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.198 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.198 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.198 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.198 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.198 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.198 "name": "raid_bdev1", 00:12:59.198 "uuid": "81216027-e833-4773-bcf2-c0964189b51a", 00:12:59.198 "strip_size_kb": 0, 00:12:59.198 "state": "online", 00:12:59.198 "raid_level": "raid1", 00:12:59.198 "superblock": true, 00:12:59.198 "num_base_bdevs": 2, 00:12:59.198 "num_base_bdevs_discovered": 1, 00:12:59.198 "num_base_bdevs_operational": 1, 00:12:59.198 "base_bdevs_list": [ 00:12:59.198 { 00:12:59.198 "name": null, 00:12:59.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.198 "is_configured": false, 00:12:59.198 "data_offset": 0, 00:12:59.198 "data_size": 63488 00:12:59.198 }, 00:12:59.198 { 00:12:59.198 "name": "BaseBdev2", 00:12:59.198 "uuid": 
"22a31d58-cfce-5ed2-a721-e11dea7e5cdf", 00:12:59.198 "is_configured": true, 00:12:59.198 "data_offset": 2048, 00:12:59.198 "data_size": 63488 00:12:59.198 } 00:12:59.198 ] 00:12:59.198 }' 00:12:59.198 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.198 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.198 [2024-11-20 15:20:45.576895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:59.198 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:59.198 Zero copy mechanism will not be used. 00:12:59.198 Running I/O for 60 seconds... 00:12:59.458 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:59.458 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.458 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.458 [2024-11-20 15:20:45.889349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:59.458 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.458 15:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:59.716 [2024-11-20 15:20:45.945878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:59.716 [2024-11-20 15:20:45.948015] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:59.716 [2024-11-20 15:20:46.074030] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:59.716 [2024-11-20 15:20:46.074548] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:59.975 [2024-11-20 15:20:46.201006] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:59.975 [2024-11-20 15:20:46.201298] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:00.234 [2024-11-20 15:20:46.545978] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:00.234 186.00 IOPS, 558.00 MiB/s [2024-11-20T15:20:46.716Z] [2024-11-20 15:20:46.666816] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:00.234 [2024-11-20 15:20:46.667137] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:00.493 15:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:00.493 15:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.493 15:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:00.493 15:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:00.493 15:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.493 15:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.493 15:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.493 15:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.493 15:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.493 15:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.752 15:20:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.752 "name": "raid_bdev1", 00:13:00.752 "uuid": "81216027-e833-4773-bcf2-c0964189b51a", 00:13:00.752 "strip_size_kb": 0, 00:13:00.752 "state": "online", 00:13:00.752 "raid_level": "raid1", 00:13:00.752 "superblock": true, 00:13:00.752 "num_base_bdevs": 2, 00:13:00.752 "num_base_bdevs_discovered": 2, 00:13:00.752 "num_base_bdevs_operational": 2, 00:13:00.752 "process": { 00:13:00.752 "type": "rebuild", 00:13:00.752 "target": "spare", 00:13:00.752 "progress": { 00:13:00.752 "blocks": 12288, 00:13:00.752 "percent": 19 00:13:00.752 } 00:13:00.752 }, 00:13:00.752 "base_bdevs_list": [ 00:13:00.752 { 00:13:00.752 "name": "spare", 00:13:00.752 "uuid": "895a56d6-bbd2-5c20-b5af-1795ea771c20", 00:13:00.752 "is_configured": true, 00:13:00.752 "data_offset": 2048, 00:13:00.752 "data_size": 63488 00:13:00.752 }, 00:13:00.752 { 00:13:00.752 "name": "BaseBdev2", 00:13:00.752 "uuid": "22a31d58-cfce-5ed2-a721-e11dea7e5cdf", 00:13:00.752 "is_configured": true, 00:13:00.752 "data_offset": 2048, 00:13:00.752 "data_size": 63488 00:13:00.752 } 00:13:00.752 ] 00:13:00.752 }' 00:13:00.753 15:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.753 [2024-11-20 15:20:47.011198] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:00.753 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:00.753 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.753 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:00.753 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:00.753 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.753 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.753 [2024-11-20 15:20:47.091208] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:00.753 [2024-11-20 15:20:47.131784] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:00.753 [2024-11-20 15:20:47.233371] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:01.012 [2024-11-20 15:20:47.236109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.012 [2024-11-20 15:20:47.236288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:01.012 [2024-11-20 15:20:47.236334] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:01.012 [2024-11-20 15:20:47.279927] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:01.012 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.012 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:01.012 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.012 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.012 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.012 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.012 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:01.012 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:13:01.012 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.012 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.012 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.012 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.012 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.012 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.012 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.012 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.012 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.012 "name": "raid_bdev1", 00:13:01.012 "uuid": "81216027-e833-4773-bcf2-c0964189b51a", 00:13:01.012 "strip_size_kb": 0, 00:13:01.012 "state": "online", 00:13:01.012 "raid_level": "raid1", 00:13:01.012 "superblock": true, 00:13:01.012 "num_base_bdevs": 2, 00:13:01.012 "num_base_bdevs_discovered": 1, 00:13:01.012 "num_base_bdevs_operational": 1, 00:13:01.012 "base_bdevs_list": [ 00:13:01.012 { 00:13:01.012 "name": null, 00:13:01.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.012 "is_configured": false, 00:13:01.012 "data_offset": 0, 00:13:01.012 "data_size": 63488 00:13:01.012 }, 00:13:01.012 { 00:13:01.012 "name": "BaseBdev2", 00:13:01.012 "uuid": "22a31d58-cfce-5ed2-a721-e11dea7e5cdf", 00:13:01.012 "is_configured": true, 00:13:01.012 "data_offset": 2048, 00:13:01.012 "data_size": 63488 00:13:01.012 } 00:13:01.012 ] 00:13:01.012 }' 00:13:01.012 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.012 15:20:47 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.272 158.00 IOPS, 474.00 MiB/s [2024-11-20T15:20:47.754Z] 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:01.272 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.272 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:01.272 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:01.272 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.272 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.272 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.272 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.272 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.272 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.272 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.272 "name": "raid_bdev1", 00:13:01.272 "uuid": "81216027-e833-4773-bcf2-c0964189b51a", 00:13:01.272 "strip_size_kb": 0, 00:13:01.272 "state": "online", 00:13:01.272 "raid_level": "raid1", 00:13:01.272 "superblock": true, 00:13:01.272 "num_base_bdevs": 2, 00:13:01.272 "num_base_bdevs_discovered": 1, 00:13:01.272 "num_base_bdevs_operational": 1, 00:13:01.272 "base_bdevs_list": [ 00:13:01.272 { 00:13:01.272 "name": null, 00:13:01.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.272 "is_configured": false, 00:13:01.272 "data_offset": 0, 00:13:01.272 "data_size": 63488 00:13:01.272 }, 00:13:01.272 { 
00:13:01.272 "name": "BaseBdev2", 00:13:01.272 "uuid": "22a31d58-cfce-5ed2-a721-e11dea7e5cdf", 00:13:01.272 "is_configured": true, 00:13:01.272 "data_offset": 2048, 00:13:01.272 "data_size": 63488 00:13:01.272 } 00:13:01.272 ] 00:13:01.272 }' 00:13:01.531 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.531 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:01.531 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.531 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:01.531 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:01.531 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.531 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.531 [2024-11-20 15:20:47.845789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:01.531 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.531 15:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:01.531 [2024-11-20 15:20:47.901651] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:01.531 [2024-11-20 15:20:47.903782] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:01.789 [2024-11-20 15:20:48.016081] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:01.789 [2024-11-20 15:20:48.016752] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:01.789 [2024-11-20 15:20:48.246405] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:02.356 [2024-11-20 15:20:48.578267] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:02.356 157.33 IOPS, 472.00 MiB/s [2024-11-20T15:20:48.838Z] [2024-11-20 15:20:48.799410] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:02.356 [2024-11-20 15:20:48.799928] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:02.615 15:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.615 15:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.615 15:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.615 15:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.615 15:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.615 15:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.615 15:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.615 15:20:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.615 15:20:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.615 15:20:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.615 15:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.615 "name": "raid_bdev1", 00:13:02.615 "uuid": "81216027-e833-4773-bcf2-c0964189b51a", 
00:13:02.615 "strip_size_kb": 0, 00:13:02.615 "state": "online", 00:13:02.615 "raid_level": "raid1", 00:13:02.615 "superblock": true, 00:13:02.615 "num_base_bdevs": 2, 00:13:02.615 "num_base_bdevs_discovered": 2, 00:13:02.615 "num_base_bdevs_operational": 2, 00:13:02.615 "process": { 00:13:02.615 "type": "rebuild", 00:13:02.615 "target": "spare", 00:13:02.615 "progress": { 00:13:02.615 "blocks": 10240, 00:13:02.615 "percent": 16 00:13:02.615 } 00:13:02.615 }, 00:13:02.615 "base_bdevs_list": [ 00:13:02.615 { 00:13:02.615 "name": "spare", 00:13:02.615 "uuid": "895a56d6-bbd2-5c20-b5af-1795ea771c20", 00:13:02.615 "is_configured": true, 00:13:02.615 "data_offset": 2048, 00:13:02.615 "data_size": 63488 00:13:02.615 }, 00:13:02.615 { 00:13:02.615 "name": "BaseBdev2", 00:13:02.615 "uuid": "22a31d58-cfce-5ed2-a721-e11dea7e5cdf", 00:13:02.615 "is_configured": true, 00:13:02.615 "data_offset": 2048, 00:13:02.615 "data_size": 63488 00:13:02.615 } 00:13:02.615 ] 00:13:02.615 }' 00:13:02.615 15:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.615 15:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.615 15:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.615 15:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.615 15:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:02.615 15:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:02.615 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:02.615 15:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:02.615 15:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 
00:13:02.615 15:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:02.615 15:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=413 00:13:02.615 15:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:02.615 15:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.615 15:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.615 15:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.615 15:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.615 15:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.615 15:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.615 15:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.615 15:20:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.615 15:20:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.615 15:20:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.615 15:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.615 "name": "raid_bdev1", 00:13:02.615 "uuid": "81216027-e833-4773-bcf2-c0964189b51a", 00:13:02.615 "strip_size_kb": 0, 00:13:02.615 "state": "online", 00:13:02.615 "raid_level": "raid1", 00:13:02.615 "superblock": true, 00:13:02.615 "num_base_bdevs": 2, 00:13:02.615 "num_base_bdevs_discovered": 2, 00:13:02.615 "num_base_bdevs_operational": 2, 00:13:02.615 "process": { 00:13:02.615 "type": "rebuild", 00:13:02.615 "target": 
"spare", 00:13:02.615 "progress": { 00:13:02.615 "blocks": 12288, 00:13:02.615 "percent": 19 00:13:02.615 } 00:13:02.615 }, 00:13:02.615 "base_bdevs_list": [ 00:13:02.615 { 00:13:02.615 "name": "spare", 00:13:02.615 "uuid": "895a56d6-bbd2-5c20-b5af-1795ea771c20", 00:13:02.615 "is_configured": true, 00:13:02.615 "data_offset": 2048, 00:13:02.615 "data_size": 63488 00:13:02.615 }, 00:13:02.615 { 00:13:02.615 "name": "BaseBdev2", 00:13:02.615 "uuid": "22a31d58-cfce-5ed2-a721-e11dea7e5cdf", 00:13:02.615 "is_configured": true, 00:13:02.615 "data_offset": 2048, 00:13:02.615 "data_size": 63488 00:13:02.615 } 00:13:02.615 ] 00:13:02.615 }' 00:13:02.615 15:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.881 15:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.881 15:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.881 [2024-11-20 15:20:49.134104] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:02.881 [2024-11-20 15:20:49.134614] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:02.881 15:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.881 15:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:02.881 [2024-11-20 15:20:49.348622] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:03.402 132.75 IOPS, 398.25 MiB/s [2024-11-20T15:20:49.884Z] [2024-11-20 15:20:49.675089] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:03.402 [2024-11-20 15:20:49.675355] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:03.759 15:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:03.759 15:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:03.759 15:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.759 15:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:03.759 15:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:03.759 15:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.759 15:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.759 15:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.759 15:20:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.759 15:20:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.759 [2024-11-20 15:20:50.164372] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:03.759 15:20:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.048 15:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.048 "name": "raid_bdev1", 00:13:04.048 "uuid": "81216027-e833-4773-bcf2-c0964189b51a", 00:13:04.048 "strip_size_kb": 0, 00:13:04.048 "state": "online", 00:13:04.048 "raid_level": "raid1", 00:13:04.048 "superblock": true, 00:13:04.048 "num_base_bdevs": 2, 00:13:04.048 "num_base_bdevs_discovered": 2, 00:13:04.048 "num_base_bdevs_operational": 2, 00:13:04.048 "process": { 00:13:04.048 "type": 
"rebuild", 00:13:04.048 "target": "spare", 00:13:04.048 "progress": { 00:13:04.048 "blocks": 26624, 00:13:04.048 "percent": 41 00:13:04.048 } 00:13:04.048 }, 00:13:04.048 "base_bdevs_list": [ 00:13:04.048 { 00:13:04.048 "name": "spare", 00:13:04.048 "uuid": "895a56d6-bbd2-5c20-b5af-1795ea771c20", 00:13:04.048 "is_configured": true, 00:13:04.048 "data_offset": 2048, 00:13:04.048 "data_size": 63488 00:13:04.048 }, 00:13:04.048 { 00:13:04.048 "name": "BaseBdev2", 00:13:04.048 "uuid": "22a31d58-cfce-5ed2-a721-e11dea7e5cdf", 00:13:04.048 "is_configured": true, 00:13:04.048 "data_offset": 2048, 00:13:04.048 "data_size": 63488 00:13:04.048 } 00:13:04.048 ] 00:13:04.048 }' 00:13:04.048 15:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.048 15:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:04.048 15:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.048 15:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:04.048 15:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:04.567 118.00 IOPS, 354.00 MiB/s [2024-11-20T15:20:51.049Z] [2024-11-20 15:20:50.912640] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:04.567 [2024-11-20 15:20:50.913159] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:04.826 15:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:04.826 15:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:04.826 15:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.826 
15:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:04.826 15:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:04.826 15:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.826 15:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.826 15:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.826 15:20:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.826 15:20:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.086 15:20:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.086 15:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.086 "name": "raid_bdev1", 00:13:05.086 "uuid": "81216027-e833-4773-bcf2-c0964189b51a", 00:13:05.086 "strip_size_kb": 0, 00:13:05.086 "state": "online", 00:13:05.086 "raid_level": "raid1", 00:13:05.086 "superblock": true, 00:13:05.086 "num_base_bdevs": 2, 00:13:05.086 "num_base_bdevs_discovered": 2, 00:13:05.086 "num_base_bdevs_operational": 2, 00:13:05.086 "process": { 00:13:05.086 "type": "rebuild", 00:13:05.086 "target": "spare", 00:13:05.086 "progress": { 00:13:05.086 "blocks": 45056, 00:13:05.086 "percent": 70 00:13:05.086 } 00:13:05.086 }, 00:13:05.086 "base_bdevs_list": [ 00:13:05.086 { 00:13:05.086 "name": "spare", 00:13:05.086 "uuid": "895a56d6-bbd2-5c20-b5af-1795ea771c20", 00:13:05.086 "is_configured": true, 00:13:05.086 "data_offset": 2048, 00:13:05.086 "data_size": 63488 00:13:05.086 }, 00:13:05.086 { 00:13:05.086 "name": "BaseBdev2", 00:13:05.086 "uuid": "22a31d58-cfce-5ed2-a721-e11dea7e5cdf", 00:13:05.086 "is_configured": true, 00:13:05.086 "data_offset": 2048, 00:13:05.086 
"data_size": 63488 00:13:05.086 } 00:13:05.086 ] 00:13:05.086 }' 00:13:05.086 15:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.086 15:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:05.086 15:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.086 [2024-11-20 15:20:51.380626] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:05.086 15:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:05.086 15:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:05.345 106.33 IOPS, 319.00 MiB/s [2024-11-20T15:20:51.827Z] [2024-11-20 15:20:51.818323] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:06.283 15:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:06.283 15:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:06.283 15:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.283 15:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:06.283 15:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:06.283 15:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.283 15:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.283 15:20:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.283 15:20:52 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:06.283 15:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.283 15:20:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.283 15:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.283 "name": "raid_bdev1", 00:13:06.283 "uuid": "81216027-e833-4773-bcf2-c0964189b51a", 00:13:06.283 "strip_size_kb": 0, 00:13:06.283 "state": "online", 00:13:06.283 "raid_level": "raid1", 00:13:06.283 "superblock": true, 00:13:06.283 "num_base_bdevs": 2, 00:13:06.283 "num_base_bdevs_discovered": 2, 00:13:06.283 "num_base_bdevs_operational": 2, 00:13:06.283 "process": { 00:13:06.283 "type": "rebuild", 00:13:06.283 "target": "spare", 00:13:06.283 "progress": { 00:13:06.283 "blocks": 61440, 00:13:06.283 "percent": 96 00:13:06.283 } 00:13:06.283 }, 00:13:06.283 "base_bdevs_list": [ 00:13:06.283 { 00:13:06.283 "name": "spare", 00:13:06.283 "uuid": "895a56d6-bbd2-5c20-b5af-1795ea771c20", 00:13:06.283 "is_configured": true, 00:13:06.283 "data_offset": 2048, 00:13:06.283 "data_size": 63488 00:13:06.283 }, 00:13:06.283 { 00:13:06.283 "name": "BaseBdev2", 00:13:06.283 "uuid": "22a31d58-cfce-5ed2-a721-e11dea7e5cdf", 00:13:06.283 "is_configured": true, 00:13:06.283 "data_offset": 2048, 00:13:06.283 "data_size": 63488 00:13:06.283 } 00:13:06.283 ] 00:13:06.283 }' 00:13:06.283 15:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.283 [2024-11-20 15:20:52.483650] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:06.283 15:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:06.283 15:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.283 15:20:52 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:06.283 15:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:06.283 [2024-11-20 15:20:52.583594] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:06.284 [2024-11-20 15:20:52.592512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.220 96.86 IOPS, 290.57 MiB/s [2024-11-20T15:20:53.702Z] 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:07.220 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:07.220 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.220 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:07.220 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:07.220 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.220 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.220 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.220 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.220 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.220 89.25 IOPS, 267.75 MiB/s [2024-11-20T15:20:53.702Z] 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.220 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.220 "name": "raid_bdev1", 00:13:07.220 "uuid": "81216027-e833-4773-bcf2-c0964189b51a", 00:13:07.220 "strip_size_kb": 0, 
00:13:07.220 "state": "online", 00:13:07.220 "raid_level": "raid1", 00:13:07.220 "superblock": true, 00:13:07.220 "num_base_bdevs": 2, 00:13:07.220 "num_base_bdevs_discovered": 2, 00:13:07.220 "num_base_bdevs_operational": 2, 00:13:07.220 "base_bdevs_list": [ 00:13:07.220 { 00:13:07.220 "name": "spare", 00:13:07.220 "uuid": "895a56d6-bbd2-5c20-b5af-1795ea771c20", 00:13:07.220 "is_configured": true, 00:13:07.220 "data_offset": 2048, 00:13:07.220 "data_size": 63488 00:13:07.220 }, 00:13:07.220 { 00:13:07.220 "name": "BaseBdev2", 00:13:07.220 "uuid": "22a31d58-cfce-5ed2-a721-e11dea7e5cdf", 00:13:07.220 "is_configured": true, 00:13:07.220 "data_offset": 2048, 00:13:07.220 "data_size": 63488 00:13:07.220 } 00:13:07.220 ] 00:13:07.220 }' 00:13:07.220 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.220 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:07.220 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.480 
15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.480 "name": "raid_bdev1", 00:13:07.480 "uuid": "81216027-e833-4773-bcf2-c0964189b51a", 00:13:07.480 "strip_size_kb": 0, 00:13:07.480 "state": "online", 00:13:07.480 "raid_level": "raid1", 00:13:07.480 "superblock": true, 00:13:07.480 "num_base_bdevs": 2, 00:13:07.480 "num_base_bdevs_discovered": 2, 00:13:07.480 "num_base_bdevs_operational": 2, 00:13:07.480 "base_bdevs_list": [ 00:13:07.480 { 00:13:07.480 "name": "spare", 00:13:07.480 "uuid": "895a56d6-bbd2-5c20-b5af-1795ea771c20", 00:13:07.480 "is_configured": true, 00:13:07.480 "data_offset": 2048, 00:13:07.480 "data_size": 63488 00:13:07.480 }, 00:13:07.480 { 00:13:07.480 "name": "BaseBdev2", 00:13:07.480 "uuid": "22a31d58-cfce-5ed2-a721-e11dea7e5cdf", 00:13:07.480 "is_configured": true, 00:13:07.480 "data_offset": 2048, 00:13:07.480 "data_size": 63488 00:13:07.480 } 00:13:07.480 ] 00:13:07.480 }' 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 
2 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.480 "name": "raid_bdev1", 00:13:07.480 "uuid": "81216027-e833-4773-bcf2-c0964189b51a", 00:13:07.480 "strip_size_kb": 0, 00:13:07.480 "state": "online", 00:13:07.480 "raid_level": "raid1", 00:13:07.480 "superblock": true, 00:13:07.480 "num_base_bdevs": 2, 00:13:07.480 "num_base_bdevs_discovered": 2, 00:13:07.480 
"num_base_bdevs_operational": 2, 00:13:07.480 "base_bdevs_list": [ 00:13:07.480 { 00:13:07.480 "name": "spare", 00:13:07.480 "uuid": "895a56d6-bbd2-5c20-b5af-1795ea771c20", 00:13:07.480 "is_configured": true, 00:13:07.480 "data_offset": 2048, 00:13:07.480 "data_size": 63488 00:13:07.480 }, 00:13:07.480 { 00:13:07.480 "name": "BaseBdev2", 00:13:07.480 "uuid": "22a31d58-cfce-5ed2-a721-e11dea7e5cdf", 00:13:07.480 "is_configured": true, 00:13:07.480 "data_offset": 2048, 00:13:07.480 "data_size": 63488 00:13:07.480 } 00:13:07.480 ] 00:13:07.480 }' 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.480 15:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.048 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:08.048 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.048 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.048 [2024-11-20 15:20:54.305430] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:08.048 [2024-11-20 15:20:54.305475] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:08.048 00:13:08.048 Latency(us) 00:13:08.048 [2024-11-20T15:20:54.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:08.048 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:08.048 raid_bdev1 : 8.78 83.44 250.33 0.00 0.00 17238.28 302.68 114543.24 00:13:08.048 [2024-11-20T15:20:54.530Z] =================================================================================================================== 00:13:08.048 [2024-11-20T15:20:54.530Z] Total : 83.44 250.33 0.00 0.00 17238.28 302.68 114543.24 00:13:08.048 { 00:13:08.048 "results": [ 00:13:08.048 { 00:13:08.048 "job": 
"raid_bdev1", 00:13:08.048 "core_mask": "0x1", 00:13:08.048 "workload": "randrw", 00:13:08.048 "percentage": 50, 00:13:08.048 "status": "finished", 00:13:08.048 "queue_depth": 2, 00:13:08.048 "io_size": 3145728, 00:13:08.048 "runtime": 8.784408, 00:13:08.048 "iops": 83.44330090314567, 00:13:08.049 "mibps": 250.32990270943702, 00:13:08.049 "io_failed": 0, 00:13:08.049 "io_timeout": 0, 00:13:08.049 "avg_latency_us": 17238.276447673365, 00:13:08.049 "min_latency_us": 302.67630522088353, 00:13:08.049 "max_latency_us": 114543.24176706828 00:13:08.049 } 00:13:08.049 ], 00:13:08.049 "core_count": 1 00:13:08.049 } 00:13:08.049 [2024-11-20 15:20:54.372712] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:08.049 [2024-11-20 15:20:54.372801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.049 [2024-11-20 15:20:54.372882] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:08.049 [2024-11-20 15:20:54.372897] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:08.049 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.049 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.049 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:08.049 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.049 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.049 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.049 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:08.049 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:08.049 
15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:08.049 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:08.049 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:08.049 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:08.049 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:08.049 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:08.049 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:08.049 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:08.049 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:08.049 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:08.049 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:08.339 /dev/nbd0 00:13:08.339 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:08.339 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:08.339 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:08.339 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:08.339 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:08.339 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:08.339 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # 
grep -q -w nbd0 /proc/partitions 00:13:08.339 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:08.339 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:08.339 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:08.339 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:08.339 1+0 records in 00:13:08.339 1+0 records out 00:13:08.339 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371545 s, 11.0 MB/s 00:13:08.339 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.339 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:08.339 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.339 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:08.339 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:08.339 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:08.339 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:08.339 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:08.339 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:08.339 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:08.339 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:08.339 15:20:54 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:08.339 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:08.339 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:08.339 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:08.339 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:08.339 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:08.339 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:08.339 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:08.604 /dev/nbd1 00:13:08.604 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:08.604 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:08.604 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:08.604 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:08.604 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:08.604 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:08.604 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:08.604 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:08.604 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:08.604 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:08.604 15:20:54 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:08.604 1+0 records in 00:13:08.604 1+0 records out 00:13:08.604 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424564 s, 9.6 MB/s 00:13:08.604 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.604 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:08.604 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.604 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:08.604 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:08.604 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:08.604 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:08.604 15:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:08.863 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:08.863 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:08.863 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:08.863 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:08.863 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:08.863 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:08.863 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:09.123 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:09.123 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:09.123 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:09.123 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.123 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.123 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:09.123 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:09.123 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.123 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:09.123 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:09.123 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:09.123 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:09.123 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:09.123 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.123 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:09.382 15:20:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.382 [2024-11-20 15:20:55.641206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:09.382 [2024-11-20 15:20:55.641279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.382 [2024-11-20 15:20:55.641307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:09.382 [2024-11-20 15:20:55.641322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.382 [2024-11-20 
15:20:55.643952] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.382 [2024-11-20 15:20:55.643997] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:09.382 [2024-11-20 15:20:55.644105] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:09.382 [2024-11-20 15:20:55.644161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:09.382 [2024-11-20 15:20:55.644295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:09.382 spare 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.382 [2024-11-20 15:20:55.744243] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:09.382 [2024-11-20 15:20:55.744513] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:09.382 [2024-11-20 15:20:55.744955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:13:09.382 [2024-11-20 15:20:55.745264] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:09.382 [2024-11-20 15:20:55.745363] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:09.382 [2024-11-20 15:20:55.745607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.382 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.382 "name": "raid_bdev1", 00:13:09.382 "uuid": "81216027-e833-4773-bcf2-c0964189b51a", 00:13:09.382 "strip_size_kb": 0, 00:13:09.382 "state": "online", 00:13:09.382 "raid_level": "raid1", 00:13:09.382 "superblock": true, 00:13:09.382 
"num_base_bdevs": 2, 00:13:09.382 "num_base_bdevs_discovered": 2, 00:13:09.382 "num_base_bdevs_operational": 2, 00:13:09.382 "base_bdevs_list": [ 00:13:09.382 { 00:13:09.382 "name": "spare", 00:13:09.382 "uuid": "895a56d6-bbd2-5c20-b5af-1795ea771c20", 00:13:09.382 "is_configured": true, 00:13:09.382 "data_offset": 2048, 00:13:09.382 "data_size": 63488 00:13:09.382 }, 00:13:09.382 { 00:13:09.382 "name": "BaseBdev2", 00:13:09.382 "uuid": "22a31d58-cfce-5ed2-a721-e11dea7e5cdf", 00:13:09.382 "is_configured": true, 00:13:09.382 "data_offset": 2048, 00:13:09.382 "data_size": 63488 00:13:09.382 } 00:13:09.382 ] 00:13:09.383 }' 00:13:09.383 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.383 15:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.950 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:09.950 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.950 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:09.950 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:09.950 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.950 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.950 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.950 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.950 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.950 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.950 15:20:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.950 "name": "raid_bdev1", 00:13:09.950 "uuid": "81216027-e833-4773-bcf2-c0964189b51a", 00:13:09.950 "strip_size_kb": 0, 00:13:09.950 "state": "online", 00:13:09.950 "raid_level": "raid1", 00:13:09.950 "superblock": true, 00:13:09.950 "num_base_bdevs": 2, 00:13:09.950 "num_base_bdevs_discovered": 2, 00:13:09.950 "num_base_bdevs_operational": 2, 00:13:09.950 "base_bdevs_list": [ 00:13:09.950 { 00:13:09.950 "name": "spare", 00:13:09.950 "uuid": "895a56d6-bbd2-5c20-b5af-1795ea771c20", 00:13:09.950 "is_configured": true, 00:13:09.950 "data_offset": 2048, 00:13:09.950 "data_size": 63488 00:13:09.950 }, 00:13:09.950 { 00:13:09.950 "name": "BaseBdev2", 00:13:09.950 "uuid": "22a31d58-cfce-5ed2-a721-e11dea7e5cdf", 00:13:09.950 "is_configured": true, 00:13:09.950 "data_offset": 2048, 00:13:09.950 "data_size": 63488 00:13:09.950 } 00:13:09.950 ] 00:13:09.950 }' 00:13:09.950 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.950 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:09.950 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.950 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:09.950 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.950 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:09.950 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.950 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.950 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.950 15:20:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:09.950 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:09.951 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.951 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.951 [2024-11-20 15:20:56.352830] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:09.951 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.951 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:09.951 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.951 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.951 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.951 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.951 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:09.951 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.951 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.951 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.951 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.951 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.951 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:09.951 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.951 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.951 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.951 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.951 "name": "raid_bdev1", 00:13:09.951 "uuid": "81216027-e833-4773-bcf2-c0964189b51a", 00:13:09.951 "strip_size_kb": 0, 00:13:09.951 "state": "online", 00:13:09.951 "raid_level": "raid1", 00:13:09.951 "superblock": true, 00:13:09.951 "num_base_bdevs": 2, 00:13:09.951 "num_base_bdevs_discovered": 1, 00:13:09.951 "num_base_bdevs_operational": 1, 00:13:09.951 "base_bdevs_list": [ 00:13:09.951 { 00:13:09.951 "name": null, 00:13:09.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.951 "is_configured": false, 00:13:09.951 "data_offset": 0, 00:13:09.951 "data_size": 63488 00:13:09.951 }, 00:13:09.951 { 00:13:09.951 "name": "BaseBdev2", 00:13:09.951 "uuid": "22a31d58-cfce-5ed2-a721-e11dea7e5cdf", 00:13:09.951 "is_configured": true, 00:13:09.951 "data_offset": 2048, 00:13:09.951 "data_size": 63488 00:13:09.951 } 00:13:09.951 ] 00:13:09.951 }' 00:13:09.951 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.951 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.519 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:10.519 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.519 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.519 [2024-11-20 15:20:56.808262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:10.519 
[2024-11-20 15:20:56.808463] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:10.519 [2024-11-20 15:20:56.808479] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:10.519 [2024-11-20 15:20:56.808525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:10.519 [2024-11-20 15:20:56.825641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:13:10.519 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.519 15:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:10.519 [2024-11-20 15:20:56.827903] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:11.454 15:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:11.454 15:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.454 15:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:11.454 15:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:11.454 15:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.454 15:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.454 15:20:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.454 15:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.454 15:20:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.454 15:20:57 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.454 15:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.454 "name": "raid_bdev1", 00:13:11.454 "uuid": "81216027-e833-4773-bcf2-c0964189b51a", 00:13:11.454 "strip_size_kb": 0, 00:13:11.454 "state": "online", 00:13:11.454 "raid_level": "raid1", 00:13:11.454 "superblock": true, 00:13:11.454 "num_base_bdevs": 2, 00:13:11.454 "num_base_bdevs_discovered": 2, 00:13:11.454 "num_base_bdevs_operational": 2, 00:13:11.454 "process": { 00:13:11.454 "type": "rebuild", 00:13:11.454 "target": "spare", 00:13:11.454 "progress": { 00:13:11.454 "blocks": 20480, 00:13:11.454 "percent": 32 00:13:11.454 } 00:13:11.454 }, 00:13:11.454 "base_bdevs_list": [ 00:13:11.454 { 00:13:11.454 "name": "spare", 00:13:11.454 "uuid": "895a56d6-bbd2-5c20-b5af-1795ea771c20", 00:13:11.454 "is_configured": true, 00:13:11.454 "data_offset": 2048, 00:13:11.454 "data_size": 63488 00:13:11.454 }, 00:13:11.454 { 00:13:11.454 "name": "BaseBdev2", 00:13:11.454 "uuid": "22a31d58-cfce-5ed2-a721-e11dea7e5cdf", 00:13:11.454 "is_configured": true, 00:13:11.454 "data_offset": 2048, 00:13:11.454 "data_size": 63488 00:13:11.454 } 00:13:11.454 ] 00:13:11.454 }' 00:13:11.454 15:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.454 15:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:11.454 15:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.713 15:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:11.713 15:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:11.713 15:20:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.713 15:20:57 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:11.713 [2024-11-20 15:20:57.971507] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:11.713 [2024-11-20 15:20:58.033512] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:11.713 [2024-11-20 15:20:58.033588] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.713 [2024-11-20 15:20:58.033610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:11.713 [2024-11-20 15:20:58.033619] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:11.713 15:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.713 15:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:11.713 15:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.713 15:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.713 15:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.713 15:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.713 15:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:11.713 15:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.713 15:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.713 15:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.713 15:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.713 15:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:11.713 15:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.713 15:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.713 15:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.713 15:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.713 15:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.713 "name": "raid_bdev1", 00:13:11.713 "uuid": "81216027-e833-4773-bcf2-c0964189b51a", 00:13:11.713 "strip_size_kb": 0, 00:13:11.713 "state": "online", 00:13:11.713 "raid_level": "raid1", 00:13:11.713 "superblock": true, 00:13:11.713 "num_base_bdevs": 2, 00:13:11.713 "num_base_bdevs_discovered": 1, 00:13:11.713 "num_base_bdevs_operational": 1, 00:13:11.713 "base_bdevs_list": [ 00:13:11.713 { 00:13:11.713 "name": null, 00:13:11.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.713 "is_configured": false, 00:13:11.713 "data_offset": 0, 00:13:11.713 "data_size": 63488 00:13:11.713 }, 00:13:11.713 { 00:13:11.713 "name": "BaseBdev2", 00:13:11.713 "uuid": "22a31d58-cfce-5ed2-a721-e11dea7e5cdf", 00:13:11.713 "is_configured": true, 00:13:11.713 "data_offset": 2048, 00:13:11.713 "data_size": 63488 00:13:11.713 } 00:13:11.713 ] 00:13:11.713 }' 00:13:11.713 15:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.713 15:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.280 15:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:12.280 15:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.280 15:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:12.280 [2024-11-20 15:20:58.502857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:12.280 [2024-11-20 15:20:58.503098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.280 [2024-11-20 15:20:58.503136] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:12.280 [2024-11-20 15:20:58.503149] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.280 [2024-11-20 15:20:58.503694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.280 [2024-11-20 15:20:58.503724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:12.280 [2024-11-20 15:20:58.503833] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:12.280 [2024-11-20 15:20:58.503859] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:12.280 [2024-11-20 15:20:58.503874] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:12.280 [2024-11-20 15:20:58.503905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:12.280 [2024-11-20 15:20:58.520543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:13:12.280 spare 00:13:12.280 15:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.280 15:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:12.280 [2024-11-20 15:20:58.523033] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:13.288 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:13.288 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.288 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:13.288 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:13.288 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.288 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.288 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.288 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.288 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.288 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.288 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.288 "name": "raid_bdev1", 00:13:13.288 "uuid": "81216027-e833-4773-bcf2-c0964189b51a", 00:13:13.288 "strip_size_kb": 0, 00:13:13.288 
"state": "online", 00:13:13.288 "raid_level": "raid1", 00:13:13.288 "superblock": true, 00:13:13.288 "num_base_bdevs": 2, 00:13:13.288 "num_base_bdevs_discovered": 2, 00:13:13.288 "num_base_bdevs_operational": 2, 00:13:13.288 "process": { 00:13:13.288 "type": "rebuild", 00:13:13.288 "target": "spare", 00:13:13.288 "progress": { 00:13:13.288 "blocks": 20480, 00:13:13.288 "percent": 32 00:13:13.288 } 00:13:13.288 }, 00:13:13.288 "base_bdevs_list": [ 00:13:13.288 { 00:13:13.288 "name": "spare", 00:13:13.288 "uuid": "895a56d6-bbd2-5c20-b5af-1795ea771c20", 00:13:13.288 "is_configured": true, 00:13:13.288 "data_offset": 2048, 00:13:13.288 "data_size": 63488 00:13:13.288 }, 00:13:13.288 { 00:13:13.288 "name": "BaseBdev2", 00:13:13.288 "uuid": "22a31d58-cfce-5ed2-a721-e11dea7e5cdf", 00:13:13.288 "is_configured": true, 00:13:13.288 "data_offset": 2048, 00:13:13.288 "data_size": 63488 00:13:13.288 } 00:13:13.288 ] 00:13:13.288 }' 00:13:13.288 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.288 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:13.288 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.288 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:13.288 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:13.288 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.288 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.288 [2024-11-20 15:20:59.642958] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:13.288 [2024-11-20 15:20:59.728722] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:13:13.288 [2024-11-20 15:20:59.729022] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.288 [2024-11-20 15:20:59.729046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:13.288 [2024-11-20 15:20:59.729064] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:13.548 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.548 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:13.548 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.548 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.548 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.548 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.548 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:13.548 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.548 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.548 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.548 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.548 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.548 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.548 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.548 15:20:59 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.548 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.548 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.548 "name": "raid_bdev1", 00:13:13.548 "uuid": "81216027-e833-4773-bcf2-c0964189b51a", 00:13:13.548 "strip_size_kb": 0, 00:13:13.548 "state": "online", 00:13:13.548 "raid_level": "raid1", 00:13:13.548 "superblock": true, 00:13:13.548 "num_base_bdevs": 2, 00:13:13.548 "num_base_bdevs_discovered": 1, 00:13:13.548 "num_base_bdevs_operational": 1, 00:13:13.548 "base_bdevs_list": [ 00:13:13.548 { 00:13:13.548 "name": null, 00:13:13.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.548 "is_configured": false, 00:13:13.548 "data_offset": 0, 00:13:13.548 "data_size": 63488 00:13:13.548 }, 00:13:13.548 { 00:13:13.548 "name": "BaseBdev2", 00:13:13.548 "uuid": "22a31d58-cfce-5ed2-a721-e11dea7e5cdf", 00:13:13.548 "is_configured": true, 00:13:13.548 "data_offset": 2048, 00:13:13.548 "data_size": 63488 00:13:13.548 } 00:13:13.548 ] 00:13:13.548 }' 00:13:13.548 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.548 15:20:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.808 15:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:13.808 15:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.808 15:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:13.808 15:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:13.808 15:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.808 15:21:00 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.808 15:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.808 15:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.808 15:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.808 15:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.808 15:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.808 "name": "raid_bdev1", 00:13:13.808 "uuid": "81216027-e833-4773-bcf2-c0964189b51a", 00:13:13.808 "strip_size_kb": 0, 00:13:13.808 "state": "online", 00:13:13.808 "raid_level": "raid1", 00:13:13.808 "superblock": true, 00:13:13.808 "num_base_bdevs": 2, 00:13:13.808 "num_base_bdevs_discovered": 1, 00:13:13.808 "num_base_bdevs_operational": 1, 00:13:13.808 "base_bdevs_list": [ 00:13:13.808 { 00:13:13.808 "name": null, 00:13:13.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.808 "is_configured": false, 00:13:13.808 "data_offset": 0, 00:13:13.808 "data_size": 63488 00:13:13.808 }, 00:13:13.808 { 00:13:13.808 "name": "BaseBdev2", 00:13:13.808 "uuid": "22a31d58-cfce-5ed2-a721-e11dea7e5cdf", 00:13:13.808 "is_configured": true, 00:13:13.808 "data_offset": 2048, 00:13:13.808 "data_size": 63488 00:13:13.808 } 00:13:13.808 ] 00:13:13.808 }' 00:13:13.808 15:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.808 15:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:13.808 15:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.066 15:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:14.066 15:21:00 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:14.066 15:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.066 15:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.066 15:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.066 15:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:14.066 15:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.066 15:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.066 [2024-11-20 15:21:00.324790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:14.066 [2024-11-20 15:21:00.324986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.066 [2024-11-20 15:21:00.325052] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:14.066 [2024-11-20 15:21:00.325142] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.066 [2024-11-20 15:21:00.325650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.066 [2024-11-20 15:21:00.325690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:14.066 [2024-11-20 15:21:00.325785] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:14.066 [2024-11-20 15:21:00.325807] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:14.066 [2024-11-20 15:21:00.325817] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:14.066 [2024-11-20 15:21:00.325832] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:14.066 BaseBdev1 00:13:14.066 15:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.066 15:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:15.009 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:15.009 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:15.009 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.009 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.009 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.009 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:15.009 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.009 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.009 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.009 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.009 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.009 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.009 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.009 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.009 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.009 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.009 "name": "raid_bdev1", 00:13:15.009 "uuid": "81216027-e833-4773-bcf2-c0964189b51a", 00:13:15.009 "strip_size_kb": 0, 00:13:15.009 "state": "online", 00:13:15.009 "raid_level": "raid1", 00:13:15.009 "superblock": true, 00:13:15.009 "num_base_bdevs": 2, 00:13:15.009 "num_base_bdevs_discovered": 1, 00:13:15.009 "num_base_bdevs_operational": 1, 00:13:15.009 "base_bdevs_list": [ 00:13:15.009 { 00:13:15.009 "name": null, 00:13:15.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.009 "is_configured": false, 00:13:15.009 "data_offset": 0, 00:13:15.009 "data_size": 63488 00:13:15.009 }, 00:13:15.009 { 00:13:15.009 "name": "BaseBdev2", 00:13:15.009 "uuid": "22a31d58-cfce-5ed2-a721-e11dea7e5cdf", 00:13:15.009 "is_configured": true, 00:13:15.009 "data_offset": 2048, 00:13:15.009 "data_size": 63488 00:13:15.009 } 00:13:15.009 ] 00:13:15.009 }' 00:13:15.009 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.009 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.579 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:15.579 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.579 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:15.579 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:15.579 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.579 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.579 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:13:15.579 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.579 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.579 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.579 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.579 "name": "raid_bdev1", 00:13:15.579 "uuid": "81216027-e833-4773-bcf2-c0964189b51a", 00:13:15.579 "strip_size_kb": 0, 00:13:15.579 "state": "online", 00:13:15.579 "raid_level": "raid1", 00:13:15.579 "superblock": true, 00:13:15.579 "num_base_bdevs": 2, 00:13:15.579 "num_base_bdevs_discovered": 1, 00:13:15.579 "num_base_bdevs_operational": 1, 00:13:15.579 "base_bdevs_list": [ 00:13:15.579 { 00:13:15.579 "name": null, 00:13:15.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.579 "is_configured": false, 00:13:15.579 "data_offset": 0, 00:13:15.579 "data_size": 63488 00:13:15.579 }, 00:13:15.579 { 00:13:15.579 "name": "BaseBdev2", 00:13:15.579 "uuid": "22a31d58-cfce-5ed2-a721-e11dea7e5cdf", 00:13:15.579 "is_configured": true, 00:13:15.579 "data_offset": 2048, 00:13:15.579 "data_size": 63488 00:13:15.579 } 00:13:15.579 ] 00:13:15.579 }' 00:13:15.579 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.579 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:15.579 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.579 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:15.579 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:15.579 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:13:15.579 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:15.579 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:15.579 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:15.579 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:15.579 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:15.579 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:15.579 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.579 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.579 [2024-11-20 15:21:01.902892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:15.579 [2024-11-20 15:21:01.903060] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:15.579 [2024-11-20 15:21:01.903075] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:15.579 request: 00:13:15.579 { 00:13:15.579 "base_bdev": "BaseBdev1", 00:13:15.579 "raid_bdev": "raid_bdev1", 00:13:15.579 "method": "bdev_raid_add_base_bdev", 00:13:15.579 "req_id": 1 00:13:15.579 } 00:13:15.579 Got JSON-RPC error response 00:13:15.579 response: 00:13:15.579 { 00:13:15.579 "code": -22, 00:13:15.579 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:15.579 } 00:13:15.579 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:13:15.579 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:15.579 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:15.579 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:15.579 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:15.579 15:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:16.513 15:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:16.513 15:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.513 15:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.513 15:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.513 15:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.513 15:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:16.513 15:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.513 15:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.513 15:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.513 15:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.513 15:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.513 15:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.513 15:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:16.513 15:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.513 15:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.513 15:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.513 "name": "raid_bdev1", 00:13:16.513 "uuid": "81216027-e833-4773-bcf2-c0964189b51a", 00:13:16.513 "strip_size_kb": 0, 00:13:16.513 "state": "online", 00:13:16.513 "raid_level": "raid1", 00:13:16.513 "superblock": true, 00:13:16.513 "num_base_bdevs": 2, 00:13:16.513 "num_base_bdevs_discovered": 1, 00:13:16.513 "num_base_bdevs_operational": 1, 00:13:16.513 "base_bdevs_list": [ 00:13:16.513 { 00:13:16.513 "name": null, 00:13:16.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.513 "is_configured": false, 00:13:16.513 "data_offset": 0, 00:13:16.513 "data_size": 63488 00:13:16.513 }, 00:13:16.513 { 00:13:16.513 "name": "BaseBdev2", 00:13:16.513 "uuid": "22a31d58-cfce-5ed2-a721-e11dea7e5cdf", 00:13:16.513 "is_configured": true, 00:13:16.513 "data_offset": 2048, 00:13:16.513 "data_size": 63488 00:13:16.513 } 00:13:16.513 ] 00:13:16.513 }' 00:13:16.513 15:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.513 15:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.079 15:21:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:17.079 15:21:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:17.079 15:21:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:17.079 15:21:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:17.079 15:21:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:17.079 15:21:03 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.079 15:21:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.079 15:21:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.079 15:21:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.079 15:21:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.079 15:21:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.079 "name": "raid_bdev1", 00:13:17.079 "uuid": "81216027-e833-4773-bcf2-c0964189b51a", 00:13:17.079 "strip_size_kb": 0, 00:13:17.079 "state": "online", 00:13:17.079 "raid_level": "raid1", 00:13:17.079 "superblock": true, 00:13:17.079 "num_base_bdevs": 2, 00:13:17.079 "num_base_bdevs_discovered": 1, 00:13:17.079 "num_base_bdevs_operational": 1, 00:13:17.079 "base_bdevs_list": [ 00:13:17.079 { 00:13:17.079 "name": null, 00:13:17.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.079 "is_configured": false, 00:13:17.079 "data_offset": 0, 00:13:17.079 "data_size": 63488 00:13:17.079 }, 00:13:17.079 { 00:13:17.079 "name": "BaseBdev2", 00:13:17.079 "uuid": "22a31d58-cfce-5ed2-a721-e11dea7e5cdf", 00:13:17.079 "is_configured": true, 00:13:17.079 "data_offset": 2048, 00:13:17.079 "data_size": 63488 00:13:17.079 } 00:13:17.079 ] 00:13:17.079 }' 00:13:17.079 15:21:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:17.079 15:21:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:17.079 15:21:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:17.079 15:21:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:17.079 15:21:03 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76639 00:13:17.079 15:21:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 76639 ']' 00:13:17.079 15:21:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 76639 00:13:17.079 15:21:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:13:17.079 15:21:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:17.079 15:21:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76639 00:13:17.079 killing process with pid 76639 00:13:17.079 Received shutdown signal, test time was about 17.972067 seconds 00:13:17.079 00:13:17.079 Latency(us) 00:13:17.079 [2024-11-20T15:21:03.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.079 [2024-11-20T15:21:03.561Z] =================================================================================================================== 00:13:17.079 [2024-11-20T15:21:03.561Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:17.079 15:21:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:17.079 15:21:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:17.079 15:21:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76639' 00:13:17.079 15:21:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 76639 00:13:17.079 [2024-11-20 15:21:03.522066] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:17.079 15:21:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 76639 00:13:17.079 [2024-11-20 15:21:03.522203] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:17.079 [2024-11-20 15:21:03.522263] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:17.079 [2024-11-20 15:21:03.522274] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:17.338 [2024-11-20 15:21:03.762346] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:18.761 ************************************ 00:13:18.761 END TEST raid_rebuild_test_sb_io 00:13:18.761 ************************************ 00:13:18.761 15:21:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:18.761 00:13:18.761 real 0m21.191s 00:13:18.761 user 0m27.199s 00:13:18.761 sys 0m2.587s 00:13:18.761 15:21:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.761 15:21:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.761 15:21:05 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:18.761 15:21:05 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:18.761 15:21:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:18.761 15:21:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.761 15:21:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:18.761 ************************************ 00:13:18.761 START TEST raid_rebuild_test 00:13:18.761 ************************************ 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:18.761 15:21:05 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77349 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77349 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77349 ']' 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:18.761 15:21:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.761 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:18.761 Zero copy mechanism will not be used. 
00:13:18.761 [2024-11-20 15:21:05.202516] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:13:18.761 [2024-11-20 15:21:05.202649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77349 ] 00:13:19.020 [2024-11-20 15:21:05.387364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.279 [2024-11-20 15:21:05.512304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.279 [2024-11-20 15:21:05.730500] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:19.279 [2024-11-20 15:21:05.730801] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.847 BaseBdev1_malloc 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.847 
[2024-11-20 15:21:06.118982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:19.847 [2024-11-20 15:21:06.119227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.847 [2024-11-20 15:21:06.119270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:19.847 [2024-11-20 15:21:06.119296] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.847 [2024-11-20 15:21:06.121995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.847 [2024-11-20 15:21:06.122043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:19.847 BaseBdev1 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.847 BaseBdev2_malloc 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.847 [2024-11-20 15:21:06.176380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:19.847 [2024-11-20 15:21:06.176683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:19.847 [2024-11-20 15:21:06.176831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:19.847 [2024-11-20 15:21:06.176952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.847 [2024-11-20 15:21:06.179746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.847 [2024-11-20 15:21:06.179792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:19.847 BaseBdev2 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.847 BaseBdev3_malloc 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.847 [2024-11-20 15:21:06.249426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:19.847 [2024-11-20 15:21:06.249637] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.847 [2024-11-20 15:21:06.249680] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:19.847 [2024-11-20 15:21:06.249696] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.847 [2024-11-20 15:21:06.252455] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.847 [2024-11-20 15:21:06.252622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:19.847 BaseBdev3 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.847 BaseBdev4_malloc 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.847 [2024-11-20 15:21:06.306665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:19.847 [2024-11-20 15:21:06.306971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.847 [2024-11-20 15:21:06.307132] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:19.847 [2024-11-20 15:21:06.307304] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.847 [2024-11-20 15:21:06.310306] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.847 [2024-11-20 15:21:06.310495] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:19.847 BaseBdev4 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.847 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.106 spare_malloc 00:13:20.106 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.106 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:20.106 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.106 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.106 spare_delay 00:13:20.106 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.106 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:20.106 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.106 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.106 [2024-11-20 15:21:06.376452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:20.106 [2024-11-20 15:21:06.376699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.106 [2024-11-20 15:21:06.376737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:20.106 [2024-11-20 15:21:06.376755] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.106 [2024-11-20 
15:21:06.379473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.106 [2024-11-20 15:21:06.379649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:20.106 spare 00:13:20.106 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.106 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:20.106 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.106 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.106 [2024-11-20 15:21:06.388567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:20.106 [2024-11-20 15:21:06.390966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:20.106 [2024-11-20 15:21:06.391172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:20.106 [2024-11-20 15:21:06.391364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:20.106 [2024-11-20 15:21:06.391502] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:20.106 [2024-11-20 15:21:06.391524] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:20.106 [2024-11-20 15:21:06.391855] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:20.106 [2024-11-20 15:21:06.392044] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:20.106 [2024-11-20 15:21:06.392059] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:20.106 [2024-11-20 15:21:06.392244] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:20.106 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.106 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:20.106 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:20.106 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.106 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.106 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.106 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:20.106 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.106 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.106 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.106 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.106 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.106 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.106 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.106 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.106 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.106 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.106 "name": "raid_bdev1", 00:13:20.106 "uuid": "37a9bd4c-9c2b-48f8-892f-a7617b6f01e6", 00:13:20.106 "strip_size_kb": 0, 00:13:20.106 "state": "online", 00:13:20.106 "raid_level": 
"raid1", 00:13:20.106 "superblock": false, 00:13:20.106 "num_base_bdevs": 4, 00:13:20.106 "num_base_bdevs_discovered": 4, 00:13:20.106 "num_base_bdevs_operational": 4, 00:13:20.106 "base_bdevs_list": [ 00:13:20.106 { 00:13:20.106 "name": "BaseBdev1", 00:13:20.106 "uuid": "da2877a2-cf89-5534-a98b-5e07a2aece79", 00:13:20.106 "is_configured": true, 00:13:20.106 "data_offset": 0, 00:13:20.106 "data_size": 65536 00:13:20.106 }, 00:13:20.106 { 00:13:20.106 "name": "BaseBdev2", 00:13:20.106 "uuid": "5ae708f4-a377-5026-8951-6cac9adf83d4", 00:13:20.106 "is_configured": true, 00:13:20.106 "data_offset": 0, 00:13:20.106 "data_size": 65536 00:13:20.106 }, 00:13:20.106 { 00:13:20.106 "name": "BaseBdev3", 00:13:20.106 "uuid": "0894107a-d7b6-5124-81ea-e5845da16573", 00:13:20.106 "is_configured": true, 00:13:20.106 "data_offset": 0, 00:13:20.106 "data_size": 65536 00:13:20.106 }, 00:13:20.106 { 00:13:20.106 "name": "BaseBdev4", 00:13:20.106 "uuid": "1793e2e2-5e54-5f2e-ae04-f7400e6aa5e5", 00:13:20.106 "is_configured": true, 00:13:20.106 "data_offset": 0, 00:13:20.106 "data_size": 65536 00:13:20.106 } 00:13:20.106 ] 00:13:20.106 }' 00:13:20.106 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.106 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.365 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:20.365 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:20.365 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.365 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.365 [2024-11-20 15:21:06.836268] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:20.623 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.623 15:21:06 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:20.623 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.623 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:20.623 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.623 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.623 15:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.623 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:20.623 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:20.623 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:20.623 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:20.623 15:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:20.623 15:21:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:20.623 15:21:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:20.623 15:21:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:20.623 15:21:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:20.623 15:21:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:20.623 15:21:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:20.623 15:21:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:20.623 15:21:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:20.624 15:21:06 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:20.882 [2024-11-20 15:21:07.135570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:20.882 /dev/nbd0 00:13:20.882 15:21:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:20.882 15:21:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:20.882 15:21:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:20.882 15:21:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:20.882 15:21:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:20.882 15:21:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:20.882 15:21:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:20.882 15:21:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:20.882 15:21:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:20.882 15:21:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:20.883 15:21:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:20.883 1+0 records in 00:13:20.883 1+0 records out 00:13:20.883 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456968 s, 9.0 MB/s 00:13:20.883 15:21:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.883 15:21:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:20.883 15:21:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:13:20.883 15:21:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:20.883 15:21:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:20.883 15:21:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:20.883 15:21:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:20.883 15:21:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:20.883 15:21:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:20.883 15:21:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:27.445 65536+0 records in 00:13:27.445 65536+0 records out 00:13:27.446 33554432 bytes (34 MB, 32 MiB) copied, 6.48753 s, 5.2 MB/s 00:13:27.446 15:21:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:27.446 15:21:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:27.446 15:21:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:27.446 15:21:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:27.446 15:21:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:27.446 15:21:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:27.446 15:21:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:27.446 [2024-11-20 15:21:13.916711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.705 15:21:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:27.705 15:21:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:27.705 
15:21:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:27.705 15:21:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:27.705 15:21:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:27.705 15:21:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:27.705 15:21:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:27.705 15:21:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:27.705 15:21:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:27.705 15:21:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.705 15:21:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.705 [2024-11-20 15:21:13.948774] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:27.705 15:21:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.705 15:21:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:27.705 15:21:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.705 15:21:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.705 15:21:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.705 15:21:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.705 15:21:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:27.705 15:21:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.705 15:21:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.705 15:21:13 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.705 15:21:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.705 15:21:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.705 15:21:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.705 15:21:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.705 15:21:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.705 15:21:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.705 15:21:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.705 "name": "raid_bdev1", 00:13:27.705 "uuid": "37a9bd4c-9c2b-48f8-892f-a7617b6f01e6", 00:13:27.705 "strip_size_kb": 0, 00:13:27.705 "state": "online", 00:13:27.705 "raid_level": "raid1", 00:13:27.705 "superblock": false, 00:13:27.705 "num_base_bdevs": 4, 00:13:27.705 "num_base_bdevs_discovered": 3, 00:13:27.705 "num_base_bdevs_operational": 3, 00:13:27.705 "base_bdevs_list": [ 00:13:27.705 { 00:13:27.705 "name": null, 00:13:27.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.705 "is_configured": false, 00:13:27.705 "data_offset": 0, 00:13:27.705 "data_size": 65536 00:13:27.705 }, 00:13:27.705 { 00:13:27.705 "name": "BaseBdev2", 00:13:27.705 "uuid": "5ae708f4-a377-5026-8951-6cac9adf83d4", 00:13:27.705 "is_configured": true, 00:13:27.705 "data_offset": 0, 00:13:27.705 "data_size": 65536 00:13:27.705 }, 00:13:27.705 { 00:13:27.705 "name": "BaseBdev3", 00:13:27.705 "uuid": "0894107a-d7b6-5124-81ea-e5845da16573", 00:13:27.705 "is_configured": true, 00:13:27.705 "data_offset": 0, 00:13:27.705 "data_size": 65536 00:13:27.705 }, 00:13:27.705 { 00:13:27.705 "name": "BaseBdev4", 00:13:27.705 "uuid": "1793e2e2-5e54-5f2e-ae04-f7400e6aa5e5", 00:13:27.705 
"is_configured": true, 00:13:27.705 "data_offset": 0, 00:13:27.705 "data_size": 65536 00:13:27.705 } 00:13:27.705 ] 00:13:27.705 }' 00:13:27.705 15:21:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.705 15:21:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.964 15:21:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:27.964 15:21:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.964 15:21:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.964 [2024-11-20 15:21:14.404110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:27.964 [2024-11-20 15:21:14.420261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:13:27.964 15:21:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.964 15:21:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:27.964 [2024-11-20 15:21:14.422588] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:28.955 15:21:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:28.955 15:21:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.955 15:21:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:28.955 15:21:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:28.955 15:21:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.955 15:21:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.955 15:21:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:28.955 15:21:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.955 15:21:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.214 15:21:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.214 15:21:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.214 "name": "raid_bdev1", 00:13:29.214 "uuid": "37a9bd4c-9c2b-48f8-892f-a7617b6f01e6", 00:13:29.214 "strip_size_kb": 0, 00:13:29.214 "state": "online", 00:13:29.214 "raid_level": "raid1", 00:13:29.214 "superblock": false, 00:13:29.214 "num_base_bdevs": 4, 00:13:29.214 "num_base_bdevs_discovered": 4, 00:13:29.214 "num_base_bdevs_operational": 4, 00:13:29.214 "process": { 00:13:29.214 "type": "rebuild", 00:13:29.214 "target": "spare", 00:13:29.214 "progress": { 00:13:29.214 "blocks": 20480, 00:13:29.214 "percent": 31 00:13:29.214 } 00:13:29.214 }, 00:13:29.214 "base_bdevs_list": [ 00:13:29.214 { 00:13:29.214 "name": "spare", 00:13:29.214 "uuid": "184a874f-fe85-5a7d-a50a-240e92a000d6", 00:13:29.214 "is_configured": true, 00:13:29.214 "data_offset": 0, 00:13:29.214 "data_size": 65536 00:13:29.214 }, 00:13:29.214 { 00:13:29.214 "name": "BaseBdev2", 00:13:29.214 "uuid": "5ae708f4-a377-5026-8951-6cac9adf83d4", 00:13:29.214 "is_configured": true, 00:13:29.214 "data_offset": 0, 00:13:29.214 "data_size": 65536 00:13:29.214 }, 00:13:29.214 { 00:13:29.214 "name": "BaseBdev3", 00:13:29.214 "uuid": "0894107a-d7b6-5124-81ea-e5845da16573", 00:13:29.214 "is_configured": true, 00:13:29.214 "data_offset": 0, 00:13:29.214 "data_size": 65536 00:13:29.214 }, 00:13:29.214 { 00:13:29.214 "name": "BaseBdev4", 00:13:29.214 "uuid": "1793e2e2-5e54-5f2e-ae04-f7400e6aa5e5", 00:13:29.214 "is_configured": true, 00:13:29.214 "data_offset": 0, 00:13:29.214 "data_size": 65536 00:13:29.214 } 00:13:29.214 ] 00:13:29.214 }' 00:13:29.214 15:21:15 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.214 15:21:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:29.214 15:21:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.214 15:21:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:29.214 15:21:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:29.214 15:21:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.214 15:21:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.214 [2024-11-20 15:21:15.561611] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:29.214 [2024-11-20 15:21:15.628386] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:29.214 [2024-11-20 15:21:15.628481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.214 [2024-11-20 15:21:15.628501] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:29.214 [2024-11-20 15:21:15.628512] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:29.214 15:21:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.214 15:21:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:29.214 15:21:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.214 15:21:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.214 15:21:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.214 15:21:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:29.214 15:21:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:29.214 15:21:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.214 15:21:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.214 15:21:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.214 15:21:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.214 15:21:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.214 15:21:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.214 15:21:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.215 15:21:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.215 15:21:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.473 15:21:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.473 "name": "raid_bdev1", 00:13:29.474 "uuid": "37a9bd4c-9c2b-48f8-892f-a7617b6f01e6", 00:13:29.474 "strip_size_kb": 0, 00:13:29.474 "state": "online", 00:13:29.474 "raid_level": "raid1", 00:13:29.474 "superblock": false, 00:13:29.474 "num_base_bdevs": 4, 00:13:29.474 "num_base_bdevs_discovered": 3, 00:13:29.474 "num_base_bdevs_operational": 3, 00:13:29.474 "base_bdevs_list": [ 00:13:29.474 { 00:13:29.474 "name": null, 00:13:29.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.474 "is_configured": false, 00:13:29.474 "data_offset": 0, 00:13:29.474 "data_size": 65536 00:13:29.474 }, 00:13:29.474 { 00:13:29.474 "name": "BaseBdev2", 00:13:29.474 "uuid": "5ae708f4-a377-5026-8951-6cac9adf83d4", 00:13:29.474 "is_configured": true, 00:13:29.474 "data_offset": 0, 00:13:29.474 "data_size": 65536 00:13:29.474 }, 00:13:29.474 { 
00:13:29.474 "name": "BaseBdev3", 00:13:29.474 "uuid": "0894107a-d7b6-5124-81ea-e5845da16573", 00:13:29.474 "is_configured": true, 00:13:29.474 "data_offset": 0, 00:13:29.474 "data_size": 65536 00:13:29.474 }, 00:13:29.474 { 00:13:29.474 "name": "BaseBdev4", 00:13:29.474 "uuid": "1793e2e2-5e54-5f2e-ae04-f7400e6aa5e5", 00:13:29.474 "is_configured": true, 00:13:29.474 "data_offset": 0, 00:13:29.474 "data_size": 65536 00:13:29.474 } 00:13:29.474 ] 00:13:29.474 }' 00:13:29.474 15:21:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.474 15:21:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.733 15:21:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:29.733 15:21:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.733 15:21:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:29.733 15:21:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:29.733 15:21:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.733 15:21:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.733 15:21:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.733 15:21:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.733 15:21:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.733 15:21:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.733 15:21:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.733 "name": "raid_bdev1", 00:13:29.733 "uuid": "37a9bd4c-9c2b-48f8-892f-a7617b6f01e6", 00:13:29.733 "strip_size_kb": 0, 00:13:29.733 "state": "online", 
00:13:29.733 "raid_level": "raid1", 00:13:29.733 "superblock": false, 00:13:29.733 "num_base_bdevs": 4, 00:13:29.733 "num_base_bdevs_discovered": 3, 00:13:29.733 "num_base_bdevs_operational": 3, 00:13:29.733 "base_bdevs_list": [ 00:13:29.733 { 00:13:29.733 "name": null, 00:13:29.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.733 "is_configured": false, 00:13:29.733 "data_offset": 0, 00:13:29.733 "data_size": 65536 00:13:29.734 }, 00:13:29.734 { 00:13:29.734 "name": "BaseBdev2", 00:13:29.734 "uuid": "5ae708f4-a377-5026-8951-6cac9adf83d4", 00:13:29.734 "is_configured": true, 00:13:29.734 "data_offset": 0, 00:13:29.734 "data_size": 65536 00:13:29.734 }, 00:13:29.734 { 00:13:29.734 "name": "BaseBdev3", 00:13:29.734 "uuid": "0894107a-d7b6-5124-81ea-e5845da16573", 00:13:29.734 "is_configured": true, 00:13:29.734 "data_offset": 0, 00:13:29.734 "data_size": 65536 00:13:29.734 }, 00:13:29.734 { 00:13:29.734 "name": "BaseBdev4", 00:13:29.734 "uuid": "1793e2e2-5e54-5f2e-ae04-f7400e6aa5e5", 00:13:29.734 "is_configured": true, 00:13:29.734 "data_offset": 0, 00:13:29.734 "data_size": 65536 00:13:29.734 } 00:13:29.734 ] 00:13:29.734 }' 00:13:29.734 15:21:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.734 15:21:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:29.993 15:21:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.993 15:21:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:29.993 15:21:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:29.993 15:21:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.993 15:21:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.993 [2024-11-20 15:21:16.258977] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:29.993 [2024-11-20 15:21:16.275184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:13:29.993 15:21:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.993 15:21:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:29.993 [2024-11-20 15:21:16.277726] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:30.930 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:30.930 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.930 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:30.930 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:30.930 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.930 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.931 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.931 15:21:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.931 15:21:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.931 15:21:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.931 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.931 "name": "raid_bdev1", 00:13:30.931 "uuid": "37a9bd4c-9c2b-48f8-892f-a7617b6f01e6", 00:13:30.931 "strip_size_kb": 0, 00:13:30.931 "state": "online", 00:13:30.931 "raid_level": "raid1", 00:13:30.931 "superblock": false, 00:13:30.931 "num_base_bdevs": 4, 00:13:30.931 
"num_base_bdevs_discovered": 4, 00:13:30.931 "num_base_bdevs_operational": 4, 00:13:30.931 "process": { 00:13:30.931 "type": "rebuild", 00:13:30.931 "target": "spare", 00:13:30.931 "progress": { 00:13:30.931 "blocks": 20480, 00:13:30.931 "percent": 31 00:13:30.931 } 00:13:30.931 }, 00:13:30.931 "base_bdevs_list": [ 00:13:30.931 { 00:13:30.931 "name": "spare", 00:13:30.931 "uuid": "184a874f-fe85-5a7d-a50a-240e92a000d6", 00:13:30.931 "is_configured": true, 00:13:30.931 "data_offset": 0, 00:13:30.931 "data_size": 65536 00:13:30.931 }, 00:13:30.931 { 00:13:30.931 "name": "BaseBdev2", 00:13:30.931 "uuid": "5ae708f4-a377-5026-8951-6cac9adf83d4", 00:13:30.931 "is_configured": true, 00:13:30.931 "data_offset": 0, 00:13:30.931 "data_size": 65536 00:13:30.931 }, 00:13:30.931 { 00:13:30.931 "name": "BaseBdev3", 00:13:30.931 "uuid": "0894107a-d7b6-5124-81ea-e5845da16573", 00:13:30.931 "is_configured": true, 00:13:30.931 "data_offset": 0, 00:13:30.931 "data_size": 65536 00:13:30.931 }, 00:13:30.931 { 00:13:30.931 "name": "BaseBdev4", 00:13:30.931 "uuid": "1793e2e2-5e54-5f2e-ae04-f7400e6aa5e5", 00:13:30.931 "is_configured": true, 00:13:30.931 "data_offset": 0, 00:13:30.931 "data_size": 65536 00:13:30.931 } 00:13:30.931 ] 00:13:30.931 }' 00:13:30.931 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.931 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:30.931 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.190 [2024-11-20 15:21:17.420732] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:31.190 [2024-11-20 15:21:17.483460] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.190 15:21:17 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.190 "name": "raid_bdev1", 00:13:31.190 "uuid": "37a9bd4c-9c2b-48f8-892f-a7617b6f01e6", 00:13:31.190 "strip_size_kb": 0, 00:13:31.190 "state": "online", 00:13:31.190 "raid_level": "raid1", 00:13:31.190 "superblock": false, 00:13:31.190 "num_base_bdevs": 4, 00:13:31.190 "num_base_bdevs_discovered": 3, 00:13:31.190 "num_base_bdevs_operational": 3, 00:13:31.190 "process": { 00:13:31.190 "type": "rebuild", 00:13:31.190 "target": "spare", 00:13:31.190 "progress": { 00:13:31.190 "blocks": 24576, 00:13:31.190 "percent": 37 00:13:31.190 } 00:13:31.190 }, 00:13:31.190 "base_bdevs_list": [ 00:13:31.190 { 00:13:31.190 "name": "spare", 00:13:31.190 "uuid": "184a874f-fe85-5a7d-a50a-240e92a000d6", 00:13:31.190 "is_configured": true, 00:13:31.190 "data_offset": 0, 00:13:31.190 "data_size": 65536 00:13:31.190 }, 00:13:31.190 { 00:13:31.190 "name": null, 00:13:31.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.190 "is_configured": false, 00:13:31.190 "data_offset": 0, 00:13:31.190 "data_size": 65536 00:13:31.190 }, 00:13:31.190 { 00:13:31.190 "name": "BaseBdev3", 00:13:31.190 "uuid": "0894107a-d7b6-5124-81ea-e5845da16573", 00:13:31.190 "is_configured": true, 00:13:31.190 "data_offset": 0, 00:13:31.190 "data_size": 65536 00:13:31.190 }, 00:13:31.190 { 00:13:31.190 "name": "BaseBdev4", 00:13:31.190 "uuid": "1793e2e2-5e54-5f2e-ae04-f7400e6aa5e5", 00:13:31.190 "is_configured": true, 00:13:31.190 "data_offset": 0, 00:13:31.190 "data_size": 65536 00:13:31.190 } 00:13:31.190 ] 00:13:31.190 }' 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=441 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.190 15:21:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.191 15:21:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.191 15:21:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.450 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.450 "name": "raid_bdev1", 00:13:31.450 "uuid": "37a9bd4c-9c2b-48f8-892f-a7617b6f01e6", 00:13:31.450 "strip_size_kb": 0, 00:13:31.450 "state": "online", 00:13:31.450 "raid_level": "raid1", 00:13:31.450 "superblock": false, 00:13:31.450 "num_base_bdevs": 4, 00:13:31.450 "num_base_bdevs_discovered": 3, 00:13:31.450 "num_base_bdevs_operational": 3, 00:13:31.450 "process": { 00:13:31.450 "type": "rebuild", 00:13:31.450 "target": "spare", 00:13:31.450 "progress": { 
00:13:31.450 "blocks": 26624, 00:13:31.450 "percent": 40 00:13:31.450 } 00:13:31.450 }, 00:13:31.450 "base_bdevs_list": [ 00:13:31.450 { 00:13:31.450 "name": "spare", 00:13:31.450 "uuid": "184a874f-fe85-5a7d-a50a-240e92a000d6", 00:13:31.450 "is_configured": true, 00:13:31.450 "data_offset": 0, 00:13:31.450 "data_size": 65536 00:13:31.450 }, 00:13:31.450 { 00:13:31.450 "name": null, 00:13:31.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.450 "is_configured": false, 00:13:31.450 "data_offset": 0, 00:13:31.450 "data_size": 65536 00:13:31.450 }, 00:13:31.450 { 00:13:31.450 "name": "BaseBdev3", 00:13:31.450 "uuid": "0894107a-d7b6-5124-81ea-e5845da16573", 00:13:31.450 "is_configured": true, 00:13:31.450 "data_offset": 0, 00:13:31.450 "data_size": 65536 00:13:31.450 }, 00:13:31.450 { 00:13:31.450 "name": "BaseBdev4", 00:13:31.450 "uuid": "1793e2e2-5e54-5f2e-ae04-f7400e6aa5e5", 00:13:31.450 "is_configured": true, 00:13:31.450 "data_offset": 0, 00:13:31.450 "data_size": 65536 00:13:31.450 } 00:13:31.450 ] 00:13:31.450 }' 00:13:31.450 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.450 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:31.450 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.450 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:31.450 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:32.387 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:32.387 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:32.387 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.387 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:13:32.387 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:32.387 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.387 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.387 15:21:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.387 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.387 15:21:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.387 15:21:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.387 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.387 "name": "raid_bdev1", 00:13:32.387 "uuid": "37a9bd4c-9c2b-48f8-892f-a7617b6f01e6", 00:13:32.387 "strip_size_kb": 0, 00:13:32.387 "state": "online", 00:13:32.387 "raid_level": "raid1", 00:13:32.387 "superblock": false, 00:13:32.387 "num_base_bdevs": 4, 00:13:32.387 "num_base_bdevs_discovered": 3, 00:13:32.387 "num_base_bdevs_operational": 3, 00:13:32.387 "process": { 00:13:32.387 "type": "rebuild", 00:13:32.387 "target": "spare", 00:13:32.387 "progress": { 00:13:32.387 "blocks": 49152, 00:13:32.387 "percent": 75 00:13:32.388 } 00:13:32.388 }, 00:13:32.388 "base_bdevs_list": [ 00:13:32.388 { 00:13:32.388 "name": "spare", 00:13:32.388 "uuid": "184a874f-fe85-5a7d-a50a-240e92a000d6", 00:13:32.388 "is_configured": true, 00:13:32.388 "data_offset": 0, 00:13:32.388 "data_size": 65536 00:13:32.388 }, 00:13:32.388 { 00:13:32.388 "name": null, 00:13:32.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.388 "is_configured": false, 00:13:32.388 "data_offset": 0, 00:13:32.388 "data_size": 65536 00:13:32.388 }, 00:13:32.388 { 00:13:32.388 "name": "BaseBdev3", 00:13:32.388 "uuid": 
"0894107a-d7b6-5124-81ea-e5845da16573", 00:13:32.388 "is_configured": true, 00:13:32.388 "data_offset": 0, 00:13:32.388 "data_size": 65536 00:13:32.388 }, 00:13:32.388 { 00:13:32.388 "name": "BaseBdev4", 00:13:32.388 "uuid": "1793e2e2-5e54-5f2e-ae04-f7400e6aa5e5", 00:13:32.388 "is_configured": true, 00:13:32.388 "data_offset": 0, 00:13:32.388 "data_size": 65536 00:13:32.388 } 00:13:32.388 ] 00:13:32.388 }' 00:13:32.388 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.388 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:32.388 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.647 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:32.647 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:33.214 [2024-11-20 15:21:19.493082] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:33.214 [2024-11-20 15:21:19.493170] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:33.214 [2024-11-20 15:21:19.493234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.473 15:21:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:33.473 15:21:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:33.473 15:21:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.473 15:21:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:33.473 15:21:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:33.473 15:21:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.473 15:21:19 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.473 15:21:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.473 15:21:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.473 15:21:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.473 15:21:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.733 15:21:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.733 "name": "raid_bdev1", 00:13:33.733 "uuid": "37a9bd4c-9c2b-48f8-892f-a7617b6f01e6", 00:13:33.733 "strip_size_kb": 0, 00:13:33.733 "state": "online", 00:13:33.733 "raid_level": "raid1", 00:13:33.733 "superblock": false, 00:13:33.733 "num_base_bdevs": 4, 00:13:33.733 "num_base_bdevs_discovered": 3, 00:13:33.733 "num_base_bdevs_operational": 3, 00:13:33.733 "base_bdevs_list": [ 00:13:33.733 { 00:13:33.733 "name": "spare", 00:13:33.733 "uuid": "184a874f-fe85-5a7d-a50a-240e92a000d6", 00:13:33.733 "is_configured": true, 00:13:33.733 "data_offset": 0, 00:13:33.733 "data_size": 65536 00:13:33.733 }, 00:13:33.733 { 00:13:33.733 "name": null, 00:13:33.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.733 "is_configured": false, 00:13:33.733 "data_offset": 0, 00:13:33.733 "data_size": 65536 00:13:33.733 }, 00:13:33.733 { 00:13:33.733 "name": "BaseBdev3", 00:13:33.733 "uuid": "0894107a-d7b6-5124-81ea-e5845da16573", 00:13:33.733 "is_configured": true, 00:13:33.733 "data_offset": 0, 00:13:33.733 "data_size": 65536 00:13:33.733 }, 00:13:33.733 { 00:13:33.733 "name": "BaseBdev4", 00:13:33.733 "uuid": "1793e2e2-5e54-5f2e-ae04-f7400e6aa5e5", 00:13:33.733 "is_configured": true, 00:13:33.733 "data_offset": 0, 00:13:33.733 "data_size": 65536 00:13:33.733 } 00:13:33.733 ] 00:13:33.733 }' 00:13:33.733 15:21:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.733 "name": "raid_bdev1", 00:13:33.733 "uuid": "37a9bd4c-9c2b-48f8-892f-a7617b6f01e6", 00:13:33.733 "strip_size_kb": 0, 00:13:33.733 "state": "online", 00:13:33.733 "raid_level": "raid1", 00:13:33.733 "superblock": false, 00:13:33.733 "num_base_bdevs": 4, 00:13:33.733 "num_base_bdevs_discovered": 3, 00:13:33.733 "num_base_bdevs_operational": 3, 00:13:33.733 
"base_bdevs_list": [ 00:13:33.733 { 00:13:33.733 "name": "spare", 00:13:33.733 "uuid": "184a874f-fe85-5a7d-a50a-240e92a000d6", 00:13:33.733 "is_configured": true, 00:13:33.733 "data_offset": 0, 00:13:33.733 "data_size": 65536 00:13:33.733 }, 00:13:33.733 { 00:13:33.733 "name": null, 00:13:33.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.733 "is_configured": false, 00:13:33.733 "data_offset": 0, 00:13:33.733 "data_size": 65536 00:13:33.733 }, 00:13:33.733 { 00:13:33.733 "name": "BaseBdev3", 00:13:33.733 "uuid": "0894107a-d7b6-5124-81ea-e5845da16573", 00:13:33.733 "is_configured": true, 00:13:33.733 "data_offset": 0, 00:13:33.733 "data_size": 65536 00:13:33.733 }, 00:13:33.733 { 00:13:33.733 "name": "BaseBdev4", 00:13:33.733 "uuid": "1793e2e2-5e54-5f2e-ae04-f7400e6aa5e5", 00:13:33.733 "is_configured": true, 00:13:33.733 "data_offset": 0, 00:13:33.733 "data_size": 65536 00:13:33.733 } 00:13:33.733 ] 00:13:33.733 }' 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.733 15:21:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.992 15:21:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.992 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.992 "name": "raid_bdev1", 00:13:33.992 "uuid": "37a9bd4c-9c2b-48f8-892f-a7617b6f01e6", 00:13:33.992 "strip_size_kb": 0, 00:13:33.992 "state": "online", 00:13:33.992 "raid_level": "raid1", 00:13:33.992 "superblock": false, 00:13:33.992 "num_base_bdevs": 4, 00:13:33.992 "num_base_bdevs_discovered": 3, 00:13:33.992 "num_base_bdevs_operational": 3, 00:13:33.992 "base_bdevs_list": [ 00:13:33.992 { 00:13:33.992 "name": "spare", 00:13:33.992 "uuid": "184a874f-fe85-5a7d-a50a-240e92a000d6", 00:13:33.992 "is_configured": true, 00:13:33.992 "data_offset": 0, 00:13:33.992 "data_size": 65536 00:13:33.992 }, 00:13:33.992 { 00:13:33.993 "name": null, 00:13:33.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.993 "is_configured": false, 00:13:33.993 "data_offset": 0, 00:13:33.993 "data_size": 65536 00:13:33.993 }, 00:13:33.993 { 00:13:33.993 "name": "BaseBdev3", 00:13:33.993 "uuid": 
"0894107a-d7b6-5124-81ea-e5845da16573", 00:13:33.993 "is_configured": true, 00:13:33.993 "data_offset": 0, 00:13:33.993 "data_size": 65536 00:13:33.993 }, 00:13:33.993 { 00:13:33.993 "name": "BaseBdev4", 00:13:33.993 "uuid": "1793e2e2-5e54-5f2e-ae04-f7400e6aa5e5", 00:13:33.993 "is_configured": true, 00:13:33.993 "data_offset": 0, 00:13:33.993 "data_size": 65536 00:13:33.993 } 00:13:33.993 ] 00:13:33.993 }' 00:13:33.993 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.993 15:21:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.252 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:34.252 15:21:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.252 15:21:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.252 [2024-11-20 15:21:20.642831] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:34.252 [2024-11-20 15:21:20.642870] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:34.252 [2024-11-20 15:21:20.642953] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:34.252 [2024-11-20 15:21:20.643038] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:34.252 [2024-11-20 15:21:20.643051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:34.252 15:21:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.252 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.252 15:21:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.252 15:21:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:13:34.252 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:34.252 15:21:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.252 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:34.252 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:34.252 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:34.252 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:34.252 15:21:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:34.252 15:21:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:34.252 15:21:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:34.252 15:21:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:34.252 15:21:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:34.252 15:21:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:34.252 15:21:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:34.252 15:21:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:34.252 15:21:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:34.511 /dev/nbd0 00:13:34.511 15:21:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:34.511 15:21:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:34.511 15:21:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:34.511 15:21:20 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:34.511 15:21:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:34.511 15:21:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:34.511 15:21:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:34.511 15:21:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:34.511 15:21:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:34.511 15:21:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:34.511 15:21:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.511 1+0 records in 00:13:34.511 1+0 records out 00:13:34.511 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042602 s, 9.6 MB/s 00:13:34.511 15:21:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.511 15:21:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:34.511 15:21:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.511 15:21:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:34.511 15:21:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:34.511 15:21:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:34.511 15:21:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:34.511 15:21:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:34.770 /dev/nbd1 00:13:34.770 15:21:21 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:34.770 15:21:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:34.770 15:21:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:34.770 15:21:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:34.770 15:21:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:34.770 15:21:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:34.770 15:21:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:34.770 15:21:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:34.770 15:21:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:34.770 15:21:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:34.770 15:21:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.770 1+0 records in 00:13:34.770 1+0 records out 00:13:34.770 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348972 s, 11.7 MB/s 00:13:34.770 15:21:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.771 15:21:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:34.771 15:21:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.771 15:21:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:34.771 15:21:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:34.771 15:21:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:13:34.771 15:21:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:34.771 15:21:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:35.030 15:21:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:35.030 15:21:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:35.030 15:21:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:35.030 15:21:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:35.030 15:21:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:35.030 15:21:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:35.030 15:21:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:35.289 15:21:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:35.289 15:21:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:35.289 15:21:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:35.289 15:21:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:35.289 15:21:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:35.289 15:21:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:35.290 15:21:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:35.290 15:21:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:35.290 15:21:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:35.290 15:21:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:35.548 15:21:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:35.548 15:21:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:35.548 15:21:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:35.548 15:21:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:35.548 15:21:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:35.548 15:21:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:35.548 15:21:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:35.548 15:21:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:35.548 15:21:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:35.548 15:21:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77349 00:13:35.548 15:21:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77349 ']' 00:13:35.549 15:21:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77349 00:13:35.549 15:21:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:13:35.549 15:21:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:35.549 15:21:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77349 00:13:35.549 15:21:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:35.549 killing process with pid 77349 00:13:35.549 Received shutdown signal, test time was about 60.000000 seconds 00:13:35.549 00:13:35.549 Latency(us) 00:13:35.549 [2024-11-20T15:21:22.031Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.549 
[2024-11-20T15:21:22.031Z] =================================================================================================================== 00:13:35.549 [2024-11-20T15:21:22.031Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:35.549 15:21:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:35.549 15:21:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77349' 00:13:35.549 15:21:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77349 00:13:35.549 [2024-11-20 15:21:21.951011] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:35.549 15:21:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77349 00:13:36.116 [2024-11-20 15:21:22.450643] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:37.492 00:13:37.492 real 0m18.512s 00:13:37.492 user 0m20.158s 00:13:37.492 sys 0m3.924s 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:37.492 ************************************ 00:13:37.492 END TEST raid_rebuild_test 00:13:37.492 ************************************ 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.492 15:21:23 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:13:37.492 15:21:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:37.492 15:21:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:37.492 15:21:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:37.492 ************************************ 00:13:37.492 START TEST raid_rebuild_test_sb 00:13:37.492 ************************************ 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=77801 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 77801 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77801 ']' 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:37.492 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:37.492 15:21:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.492 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:37.492 Zero copy mechanism will not be used. 00:13:37.492 [2024-11-20 15:21:23.783151] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:13:37.492 [2024-11-20 15:21:23.783284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77801 ] 00:13:37.492 [2024-11-20 15:21:23.964028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.810 [2024-11-20 15:21:24.085858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.070 [2024-11-20 15:21:24.300038] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.070 [2024-11-20 15:21:24.300109] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.330 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:38.330 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:38.330 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:38.330 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:38.330 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
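The teardown traced earlier (`killprocess 77349`) boils down to: bail out on an empty pid, verify the process is still alive with `kill -0`, then terminate and reap it. A simplified sketch, omitting the trace's `uname` and `ps --no-headers -o comm=` checks that decide whether `sudo` is needed:

```shell
# Simplified killprocess in the spirit of common/autotest_common.sh:
# refuse empty pids, confirm liveness with kill -0, then TERM and reap.
killprocess() {
    pid=$1
    [ -n "$pid" ] || return 1                 # '[' -z ... ']' guard in the trace
    kill -0 "$pid" 2>/dev/null || return 1    # kill -0: liveness probe only
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap; ignore the TERM exit status
    return 0
}
```

Note that `wait "$pid"` only works on children of the calling shell, which holds for the bdevperf process the test script itself launched.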
00:13:38.330 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.330 BaseBdev1_malloc 00:13:38.330 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.330 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:38.330 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.330 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.330 [2024-11-20 15:21:24.683822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:38.330 [2024-11-20 15:21:24.684088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.330 [2024-11-20 15:21:24.684151] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:38.330 [2024-11-20 15:21:24.684254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.330 [2024-11-20 15:21:24.686834] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.330 BaseBdev1 00:13:38.330 [2024-11-20 15:21:24.687004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:38.330 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.330 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:38.330 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:38.330 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.330 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.330 BaseBdev2_malloc 00:13:38.330 15:21:24 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.330 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:38.330 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.330 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.330 [2024-11-20 15:21:24.741003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:38.330 [2024-11-20 15:21:24.741224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.330 [2024-11-20 15:21:24.741288] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:38.330 [2024-11-20 15:21:24.741371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.330 [2024-11-20 15:21:24.744033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.330 [2024-11-20 15:21:24.744185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:38.330 BaseBdev2 00:13:38.330 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.330 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:38.330 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:38.330 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.330 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.330 BaseBdev3_malloc 00:13:38.330 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.330 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 
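The sh@601–603 lines traced above create, per base bdev, one 32 MiB malloc bdev with 512-byte blocks plus a passthru wrapper (`BaseBdevN` on `BaseBdevN_malloc`). A sketch that only assembles those `rpc.py` invocations — printed rather than executed, since they need a running SPDK target listening on /var/tmp/spdk.sock:

```shell
# Assemble the rpc.py command list corresponding to the trace above:
# a malloc bdev and a passthru bdev for each of the 4 base bdevs.
rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
cmds=""
for i in 1 2 3 4; do
    cmds="$cmds$rpc bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
$rpc bdev_passthru_create -b BaseBdev${i}_malloc -p BaseBdev${i}
"
done
printf '%s' "$cmds"
```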
00:13:38.330 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.330 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.591 [2024-11-20 15:21:24.812435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:38.591 [2024-11-20 15:21:24.812642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.591 [2024-11-20 15:21:24.812715] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:38.591 [2024-11-20 15:21:24.812834] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.591 [2024-11-20 15:21:24.815262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.591 [2024-11-20 15:21:24.815431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:38.591 BaseBdev3 00:13:38.591 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.591 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:38.591 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:38.591 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.591 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.591 BaseBdev4_malloc 00:13:38.591 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.591 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:38.591 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.591 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:13:38.591 [2024-11-20 15:21:24.867909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:38.591 [2024-11-20 15:21:24.868121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.591 [2024-11-20 15:21:24.868180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:38.591 [2024-11-20 15:21:24.868292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.591 [2024-11-20 15:21:24.870785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.591 BaseBdev4 00:13:38.591 [2024-11-20 15:21:24.870943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:38.591 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.591 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:38.591 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.591 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.591 spare_malloc 00:13:38.591 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.591 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:38.591 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.591 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.591 spare_delay 00:13:38.591 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.591 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:38.591 15:21:24 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.591 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.591 [2024-11-20 15:21:24.938205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:38.591 [2024-11-20 15:21:24.938415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.591 [2024-11-20 15:21:24.938475] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:38.591 [2024-11-20 15:21:24.938558] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.591 [2024-11-20 15:21:24.941127] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.591 [2024-11-20 15:21:24.941271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:38.591 spare 00:13:38.591 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.591 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:38.591 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.591 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.591 [2024-11-20 15:21:24.950274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:38.591 [2024-11-20 15:21:24.952598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:38.591 [2024-11-20 15:21:24.952804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:38.591 [2024-11-20 15:21:24.952873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:38.591 [2024-11-20 15:21:24.953079] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:38.591 [2024-11-20 15:21:24.953099] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:38.591 [2024-11-20 15:21:24.953402] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:38.591 [2024-11-20 15:21:24.953592] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:38.591 [2024-11-20 15:21:24.953604] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:38.591 [2024-11-20 15:21:24.953839] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.591 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.591 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:38.592 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.592 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.592 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.592 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.592 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:38.592 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.592 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.592 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.592 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.592 15:21:24 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.592 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.592 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.592 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.592 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.592 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.592 "name": "raid_bdev1", 00:13:38.592 "uuid": "774beb96-93ed-4e30-92e0-e13de3c54bfd", 00:13:38.592 "strip_size_kb": 0, 00:13:38.592 "state": "online", 00:13:38.592 "raid_level": "raid1", 00:13:38.592 "superblock": true, 00:13:38.592 "num_base_bdevs": 4, 00:13:38.592 "num_base_bdevs_discovered": 4, 00:13:38.592 "num_base_bdevs_operational": 4, 00:13:38.592 "base_bdevs_list": [ 00:13:38.592 { 00:13:38.592 "name": "BaseBdev1", 00:13:38.592 "uuid": "60745772-a326-50c0-a7c9-53941b8e209b", 00:13:38.592 "is_configured": true, 00:13:38.592 "data_offset": 2048, 00:13:38.592 "data_size": 63488 00:13:38.592 }, 00:13:38.592 { 00:13:38.592 "name": "BaseBdev2", 00:13:38.592 "uuid": "e1418be3-455c-5e7e-a7b8-69fb714dcbcc", 00:13:38.592 "is_configured": true, 00:13:38.592 "data_offset": 2048, 00:13:38.592 "data_size": 63488 00:13:38.592 }, 00:13:38.592 { 00:13:38.592 "name": "BaseBdev3", 00:13:38.592 "uuid": "3f85961d-28ff-50f1-98e5-91975287e0c4", 00:13:38.592 "is_configured": true, 00:13:38.592 "data_offset": 2048, 00:13:38.592 "data_size": 63488 00:13:38.592 }, 00:13:38.592 { 00:13:38.592 "name": "BaseBdev4", 00:13:38.592 "uuid": "953e2ffa-1404-5445-8530-69485ed9649d", 00:13:38.592 "is_configured": true, 00:13:38.592 "data_offset": 2048, 00:13:38.592 "data_size": 63488 00:13:38.592 } 00:13:38.592 ] 00:13:38.592 }' 00:13:38.592 15:21:25 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.592 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.160 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:39.160 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.160 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.160 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:39.160 [2024-11-20 15:21:25.409968] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:39.160 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.160 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:39.160 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:39.160 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.160 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.160 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.160 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.160 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:39.160 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:39.160 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:39.160 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:39.160 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
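The state checks above pipe `bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "raid_bdev1")'` (sh@113). A jq-free sketch of the same kind of field extraction, run against a trimmed copy of the JSON captured in the trace (fields abridged; `sed` stands in for `jq` so the snippet runs anywhere):

```shell
# Pull state fields out of a (trimmed) raid_bdev_info document like the
# one logged above, using sed line-matching instead of jq.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 4
}'
state=$(printf '%s\n' "$raid_bdev_info" | sed -n 's/.*"state": "\([^"]*\)".*/\1/p')
level=$(printf '%s\n' "$raid_bdev_info" | sed -n 's/.*"raid_level": "\([^"]*\)".*/\1/p')
```

The real helper (`verify_raid_bdev_state`) compares these fields, plus the discovered/operational base bdev counts, against the expected values passed by the caller.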
00:13:39.160 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:39.160 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:39.160 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:39.160 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:39.160 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:39.160 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:39.160 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:39.160 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:39.160 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:39.420 [2024-11-20 15:21:25.689281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:39.420 /dev/nbd0 00:13:39.420 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:39.420 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:39.420 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:39.420 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:39.420 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:39.420 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:39.420 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:39.420 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:39.420 
15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:39.420 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:39.420 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:39.420 1+0 records in 00:13:39.420 1+0 records out 00:13:39.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337614 s, 12.1 MB/s 00:13:39.420 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.420 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:39.420 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.420 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:39.420 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:39.420 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:39.420 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:39.420 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:39.420 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:39.420 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:46.013 63488+0 records in 00:13:46.013 63488+0 records out 00:13:46.013 32505856 bytes (33 MB, 31 MiB) copied, 6.56617 s, 5.0 MB/s 00:13:46.013 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:46.013 15:21:32 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:46.013 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:46.013 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:46.013 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:46.013 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:46.013 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:46.272 [2024-11-20 15:21:32.538088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.272 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:46.272 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:46.272 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:46.272 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:46.272 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:46.272 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:46.272 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:46.272 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:46.272 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:46.272 15:21:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.272 15:21:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.272 [2024-11-20 15:21:32.578129] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:46.272 
15:21:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.272 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:46.272 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.272 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.272 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.272 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.272 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:46.272 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.272 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.272 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.272 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.272 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.272 15:21:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.272 15:21:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.272 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.272 15:21:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.272 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.272 "name": "raid_bdev1", 00:13:46.272 "uuid": "774beb96-93ed-4e30-92e0-e13de3c54bfd", 00:13:46.272 "strip_size_kb": 0, 00:13:46.272 "state": 
"online", 00:13:46.272 "raid_level": "raid1", 00:13:46.272 "superblock": true, 00:13:46.272 "num_base_bdevs": 4, 00:13:46.272 "num_base_bdevs_discovered": 3, 00:13:46.272 "num_base_bdevs_operational": 3, 00:13:46.272 "base_bdevs_list": [ 00:13:46.272 { 00:13:46.272 "name": null, 00:13:46.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.272 "is_configured": false, 00:13:46.272 "data_offset": 0, 00:13:46.272 "data_size": 63488 00:13:46.272 }, 00:13:46.272 { 00:13:46.272 "name": "BaseBdev2", 00:13:46.272 "uuid": "e1418be3-455c-5e7e-a7b8-69fb714dcbcc", 00:13:46.272 "is_configured": true, 00:13:46.272 "data_offset": 2048, 00:13:46.272 "data_size": 63488 00:13:46.272 }, 00:13:46.272 { 00:13:46.272 "name": "BaseBdev3", 00:13:46.272 "uuid": "3f85961d-28ff-50f1-98e5-91975287e0c4", 00:13:46.272 "is_configured": true, 00:13:46.272 "data_offset": 2048, 00:13:46.272 "data_size": 63488 00:13:46.272 }, 00:13:46.272 { 00:13:46.272 "name": "BaseBdev4", 00:13:46.272 "uuid": "953e2ffa-1404-5445-8530-69485ed9649d", 00:13:46.272 "is_configured": true, 00:13:46.272 "data_offset": 2048, 00:13:46.272 "data_size": 63488 00:13:46.272 } 00:13:46.272 ] 00:13:46.272 }' 00:13:46.272 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.272 15:21:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.840 15:21:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:46.840 15:21:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.840 15:21:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.840 [2024-11-20 15:21:33.017554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:46.840 [2024-11-20 15:21:33.033329] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:13:46.840 15:21:33 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.840 15:21:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:46.840 [2024-11-20 15:21:33.035660] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:47.777 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:47.777 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.777 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.777 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.777 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.777 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.777 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.777 15:21:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.777 15:21:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.777 15:21:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.777 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.777 "name": "raid_bdev1", 00:13:47.777 "uuid": "774beb96-93ed-4e30-92e0-e13de3c54bfd", 00:13:47.777 "strip_size_kb": 0, 00:13:47.777 "state": "online", 00:13:47.777 "raid_level": "raid1", 00:13:47.777 "superblock": true, 00:13:47.777 "num_base_bdevs": 4, 00:13:47.777 "num_base_bdevs_discovered": 4, 00:13:47.777 "num_base_bdevs_operational": 4, 00:13:47.777 "process": { 00:13:47.777 "type": "rebuild", 00:13:47.777 "target": "spare", 00:13:47.777 "progress": { 00:13:47.777 "blocks": 20480, 
00:13:47.777 "percent": 32 00:13:47.777 } 00:13:47.777 }, 00:13:47.777 "base_bdevs_list": [ 00:13:47.777 { 00:13:47.777 "name": "spare", 00:13:47.777 "uuid": "ddb5418b-7f39-5702-a760-1c3be5c0397a", 00:13:47.777 "is_configured": true, 00:13:47.777 "data_offset": 2048, 00:13:47.777 "data_size": 63488 00:13:47.777 }, 00:13:47.777 { 00:13:47.777 "name": "BaseBdev2", 00:13:47.777 "uuid": "e1418be3-455c-5e7e-a7b8-69fb714dcbcc", 00:13:47.777 "is_configured": true, 00:13:47.777 "data_offset": 2048, 00:13:47.777 "data_size": 63488 00:13:47.777 }, 00:13:47.777 { 00:13:47.777 "name": "BaseBdev3", 00:13:47.777 "uuid": "3f85961d-28ff-50f1-98e5-91975287e0c4", 00:13:47.777 "is_configured": true, 00:13:47.777 "data_offset": 2048, 00:13:47.777 "data_size": 63488 00:13:47.777 }, 00:13:47.777 { 00:13:47.777 "name": "BaseBdev4", 00:13:47.777 "uuid": "953e2ffa-1404-5445-8530-69485ed9649d", 00:13:47.777 "is_configured": true, 00:13:47.777 "data_offset": 2048, 00:13:47.777 "data_size": 63488 00:13:47.777 } 00:13:47.777 ] 00:13:47.777 }' 00:13:47.777 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.777 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:47.777 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.777 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.777 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:47.777 15:21:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.777 15:21:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.777 [2024-11-20 15:21:34.183141] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:47.777 [2024-11-20 15:21:34.241278] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:47.777 [2024-11-20 15:21:34.241366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.777 [2024-11-20 15:21:34.241385] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:47.777 [2024-11-20 15:21:34.241396] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:48.036 15:21:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.036 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:48.036 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.036 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.036 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.036 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.036 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:48.036 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.036 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.036 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.036 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.036 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.036 15:21:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.036 15:21:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:13:48.036 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.036 15:21:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.036 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.036 "name": "raid_bdev1", 00:13:48.036 "uuid": "774beb96-93ed-4e30-92e0-e13de3c54bfd", 00:13:48.036 "strip_size_kb": 0, 00:13:48.036 "state": "online", 00:13:48.036 "raid_level": "raid1", 00:13:48.036 "superblock": true, 00:13:48.036 "num_base_bdevs": 4, 00:13:48.036 "num_base_bdevs_discovered": 3, 00:13:48.036 "num_base_bdevs_operational": 3, 00:13:48.036 "base_bdevs_list": [ 00:13:48.036 { 00:13:48.036 "name": null, 00:13:48.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.036 "is_configured": false, 00:13:48.036 "data_offset": 0, 00:13:48.036 "data_size": 63488 00:13:48.036 }, 00:13:48.036 { 00:13:48.036 "name": "BaseBdev2", 00:13:48.036 "uuid": "e1418be3-455c-5e7e-a7b8-69fb714dcbcc", 00:13:48.036 "is_configured": true, 00:13:48.036 "data_offset": 2048, 00:13:48.036 "data_size": 63488 00:13:48.036 }, 00:13:48.036 { 00:13:48.037 "name": "BaseBdev3", 00:13:48.037 "uuid": "3f85961d-28ff-50f1-98e5-91975287e0c4", 00:13:48.037 "is_configured": true, 00:13:48.037 "data_offset": 2048, 00:13:48.037 "data_size": 63488 00:13:48.037 }, 00:13:48.037 { 00:13:48.037 "name": "BaseBdev4", 00:13:48.037 "uuid": "953e2ffa-1404-5445-8530-69485ed9649d", 00:13:48.037 "is_configured": true, 00:13:48.037 "data_offset": 2048, 00:13:48.037 "data_size": 63488 00:13:48.037 } 00:13:48.037 ] 00:13:48.037 }' 00:13:48.037 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.037 15:21:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.296 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:48.296 
15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.296 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:48.296 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:48.296 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.296 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.296 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.296 15:21:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.296 15:21:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.296 15:21:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.296 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.296 "name": "raid_bdev1", 00:13:48.296 "uuid": "774beb96-93ed-4e30-92e0-e13de3c54bfd", 00:13:48.296 "strip_size_kb": 0, 00:13:48.296 "state": "online", 00:13:48.296 "raid_level": "raid1", 00:13:48.296 "superblock": true, 00:13:48.296 "num_base_bdevs": 4, 00:13:48.296 "num_base_bdevs_discovered": 3, 00:13:48.296 "num_base_bdevs_operational": 3, 00:13:48.296 "base_bdevs_list": [ 00:13:48.296 { 00:13:48.296 "name": null, 00:13:48.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.296 "is_configured": false, 00:13:48.296 "data_offset": 0, 00:13:48.296 "data_size": 63488 00:13:48.296 }, 00:13:48.296 { 00:13:48.296 "name": "BaseBdev2", 00:13:48.296 "uuid": "e1418be3-455c-5e7e-a7b8-69fb714dcbcc", 00:13:48.296 "is_configured": true, 00:13:48.296 "data_offset": 2048, 00:13:48.296 "data_size": 63488 00:13:48.296 }, 00:13:48.296 { 00:13:48.296 "name": "BaseBdev3", 00:13:48.296 "uuid": 
"3f85961d-28ff-50f1-98e5-91975287e0c4", 00:13:48.296 "is_configured": true, 00:13:48.296 "data_offset": 2048, 00:13:48.296 "data_size": 63488 00:13:48.296 }, 00:13:48.296 { 00:13:48.296 "name": "BaseBdev4", 00:13:48.296 "uuid": "953e2ffa-1404-5445-8530-69485ed9649d", 00:13:48.296 "is_configured": true, 00:13:48.296 "data_offset": 2048, 00:13:48.296 "data_size": 63488 00:13:48.296 } 00:13:48.296 ] 00:13:48.296 }' 00:13:48.296 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.554 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:48.554 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.554 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:48.554 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:48.554 15:21:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.554 15:21:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.554 [2024-11-20 15:21:34.859743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:48.554 [2024-11-20 15:21:34.875146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:13:48.554 15:21:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.554 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:48.554 [2024-11-20 15:21:34.877537] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:49.506 15:21:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.506 15:21:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:49.506 15:21:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.506 15:21:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.506 15:21:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.506 15:21:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.506 15:21:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.506 15:21:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.506 15:21:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.506 15:21:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.506 15:21:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.506 "name": "raid_bdev1", 00:13:49.506 "uuid": "774beb96-93ed-4e30-92e0-e13de3c54bfd", 00:13:49.506 "strip_size_kb": 0, 00:13:49.506 "state": "online", 00:13:49.506 "raid_level": "raid1", 00:13:49.506 "superblock": true, 00:13:49.506 "num_base_bdevs": 4, 00:13:49.506 "num_base_bdevs_discovered": 4, 00:13:49.506 "num_base_bdevs_operational": 4, 00:13:49.506 "process": { 00:13:49.506 "type": "rebuild", 00:13:49.506 "target": "spare", 00:13:49.506 "progress": { 00:13:49.506 "blocks": 20480, 00:13:49.506 "percent": 32 00:13:49.506 } 00:13:49.506 }, 00:13:49.506 "base_bdevs_list": [ 00:13:49.506 { 00:13:49.506 "name": "spare", 00:13:49.506 "uuid": "ddb5418b-7f39-5702-a760-1c3be5c0397a", 00:13:49.506 "is_configured": true, 00:13:49.506 "data_offset": 2048, 00:13:49.506 "data_size": 63488 00:13:49.506 }, 00:13:49.506 { 00:13:49.506 "name": "BaseBdev2", 00:13:49.506 "uuid": "e1418be3-455c-5e7e-a7b8-69fb714dcbcc", 00:13:49.506 "is_configured": true, 00:13:49.506 "data_offset": 2048, 
00:13:49.506 "data_size": 63488 00:13:49.506 }, 00:13:49.506 { 00:13:49.506 "name": "BaseBdev3", 00:13:49.506 "uuid": "3f85961d-28ff-50f1-98e5-91975287e0c4", 00:13:49.506 "is_configured": true, 00:13:49.506 "data_offset": 2048, 00:13:49.506 "data_size": 63488 00:13:49.506 }, 00:13:49.506 { 00:13:49.506 "name": "BaseBdev4", 00:13:49.506 "uuid": "953e2ffa-1404-5445-8530-69485ed9649d", 00:13:49.506 "is_configured": true, 00:13:49.506 "data_offset": 2048, 00:13:49.506 "data_size": 63488 00:13:49.506 } 00:13:49.506 ] 00:13:49.506 }' 00:13:49.506 15:21:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.506 15:21:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:49.506 15:21:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.765 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:49.765 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:49.765 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:49.765 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:49.765 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:49.765 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:49.765 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:49.765 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:49.765 15:21:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.765 15:21:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.765 [2024-11-20 15:21:36.036602] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:49.765 [2024-11-20 15:21:36.183308] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:13:49.765 15:21:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.765 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:49.765 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:49.765 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.765 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.765 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.765 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.765 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.765 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.765 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.765 15:21:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.765 15:21:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.765 15:21:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.765 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.765 "name": "raid_bdev1", 00:13:49.765 "uuid": "774beb96-93ed-4e30-92e0-e13de3c54bfd", 00:13:49.765 "strip_size_kb": 0, 00:13:49.765 "state": "online", 00:13:49.765 "raid_level": "raid1", 00:13:49.765 "superblock": true, 00:13:49.765 "num_base_bdevs": 4, 
00:13:49.765 "num_base_bdevs_discovered": 3, 00:13:49.765 "num_base_bdevs_operational": 3, 00:13:49.765 "process": { 00:13:49.765 "type": "rebuild", 00:13:49.765 "target": "spare", 00:13:49.765 "progress": { 00:13:49.765 "blocks": 24576, 00:13:49.765 "percent": 38 00:13:49.765 } 00:13:49.765 }, 00:13:49.765 "base_bdevs_list": [ 00:13:49.765 { 00:13:49.765 "name": "spare", 00:13:49.765 "uuid": "ddb5418b-7f39-5702-a760-1c3be5c0397a", 00:13:49.765 "is_configured": true, 00:13:49.765 "data_offset": 2048, 00:13:49.765 "data_size": 63488 00:13:49.765 }, 00:13:49.765 { 00:13:49.765 "name": null, 00:13:49.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.765 "is_configured": false, 00:13:49.765 "data_offset": 0, 00:13:49.765 "data_size": 63488 00:13:49.765 }, 00:13:49.765 { 00:13:49.765 "name": "BaseBdev3", 00:13:49.765 "uuid": "3f85961d-28ff-50f1-98e5-91975287e0c4", 00:13:49.765 "is_configured": true, 00:13:49.765 "data_offset": 2048, 00:13:49.765 "data_size": 63488 00:13:49.765 }, 00:13:49.765 { 00:13:49.765 "name": "BaseBdev4", 00:13:49.765 "uuid": "953e2ffa-1404-5445-8530-69485ed9649d", 00:13:49.765 "is_configured": true, 00:13:49.765 "data_offset": 2048, 00:13:49.765 "data_size": 63488 00:13:49.765 } 00:13:49.765 ] 00:13:49.766 }' 00:13:49.766 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.024 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:50.024 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.024 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:50.024 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=460 00:13:50.024 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:50.024 15:21:36 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:50.024 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.024 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:50.024 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:50.024 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.024 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.024 15:21:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.024 15:21:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.024 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.024 15:21:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.024 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.024 "name": "raid_bdev1", 00:13:50.024 "uuid": "774beb96-93ed-4e30-92e0-e13de3c54bfd", 00:13:50.024 "strip_size_kb": 0, 00:13:50.024 "state": "online", 00:13:50.024 "raid_level": "raid1", 00:13:50.024 "superblock": true, 00:13:50.024 "num_base_bdevs": 4, 00:13:50.024 "num_base_bdevs_discovered": 3, 00:13:50.024 "num_base_bdevs_operational": 3, 00:13:50.024 "process": { 00:13:50.024 "type": "rebuild", 00:13:50.024 "target": "spare", 00:13:50.024 "progress": { 00:13:50.024 "blocks": 26624, 00:13:50.024 "percent": 41 00:13:50.024 } 00:13:50.024 }, 00:13:50.024 "base_bdevs_list": [ 00:13:50.024 { 00:13:50.024 "name": "spare", 00:13:50.024 "uuid": "ddb5418b-7f39-5702-a760-1c3be5c0397a", 00:13:50.024 "is_configured": true, 00:13:50.024 "data_offset": 2048, 00:13:50.024 "data_size": 63488 00:13:50.024 }, 00:13:50.024 { 
00:13:50.024 "name": null, 00:13:50.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.024 "is_configured": false, 00:13:50.024 "data_offset": 0, 00:13:50.024 "data_size": 63488 00:13:50.024 }, 00:13:50.024 { 00:13:50.024 "name": "BaseBdev3", 00:13:50.024 "uuid": "3f85961d-28ff-50f1-98e5-91975287e0c4", 00:13:50.024 "is_configured": true, 00:13:50.024 "data_offset": 2048, 00:13:50.024 "data_size": 63488 00:13:50.024 }, 00:13:50.024 { 00:13:50.025 "name": "BaseBdev4", 00:13:50.025 "uuid": "953e2ffa-1404-5445-8530-69485ed9649d", 00:13:50.025 "is_configured": true, 00:13:50.025 "data_offset": 2048, 00:13:50.025 "data_size": 63488 00:13:50.025 } 00:13:50.025 ] 00:13:50.025 }' 00:13:50.025 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.025 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:50.025 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.025 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:50.025 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:51.414 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:51.414 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:51.414 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.414 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:51.414 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:51.414 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.414 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:51.414 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.414 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.414 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.414 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.414 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.414 "name": "raid_bdev1", 00:13:51.414 "uuid": "774beb96-93ed-4e30-92e0-e13de3c54bfd", 00:13:51.414 "strip_size_kb": 0, 00:13:51.414 "state": "online", 00:13:51.414 "raid_level": "raid1", 00:13:51.414 "superblock": true, 00:13:51.414 "num_base_bdevs": 4, 00:13:51.414 "num_base_bdevs_discovered": 3, 00:13:51.414 "num_base_bdevs_operational": 3, 00:13:51.414 "process": { 00:13:51.414 "type": "rebuild", 00:13:51.414 "target": "spare", 00:13:51.414 "progress": { 00:13:51.414 "blocks": 49152, 00:13:51.414 "percent": 77 00:13:51.414 } 00:13:51.414 }, 00:13:51.414 "base_bdevs_list": [ 00:13:51.414 { 00:13:51.414 "name": "spare", 00:13:51.414 "uuid": "ddb5418b-7f39-5702-a760-1c3be5c0397a", 00:13:51.414 "is_configured": true, 00:13:51.414 "data_offset": 2048, 00:13:51.414 "data_size": 63488 00:13:51.414 }, 00:13:51.414 { 00:13:51.414 "name": null, 00:13:51.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.414 "is_configured": false, 00:13:51.414 "data_offset": 0, 00:13:51.414 "data_size": 63488 00:13:51.414 }, 00:13:51.414 { 00:13:51.414 "name": "BaseBdev3", 00:13:51.414 "uuid": "3f85961d-28ff-50f1-98e5-91975287e0c4", 00:13:51.414 "is_configured": true, 00:13:51.414 "data_offset": 2048, 00:13:51.414 "data_size": 63488 00:13:51.414 }, 00:13:51.414 { 00:13:51.414 "name": "BaseBdev4", 00:13:51.414 "uuid": "953e2ffa-1404-5445-8530-69485ed9649d", 00:13:51.414 "is_configured": true, 00:13:51.414 "data_offset": 
2048, 00:13:51.414 "data_size": 63488 00:13:51.414 } 00:13:51.414 ] 00:13:51.414 }' 00:13:51.414 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.414 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:51.414 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.414 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:51.414 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:51.673 [2024-11-20 15:21:38.092471] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:51.673 [2024-11-20 15:21:38.092571] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:51.673 [2024-11-20 15:21:38.092734] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.242 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:52.242 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.242 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.242 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:52.242 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.242 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.242 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.242 15:21:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.242 15:21:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:13:52.242 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.242 15:21:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.242 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.242 "name": "raid_bdev1", 00:13:52.242 "uuid": "774beb96-93ed-4e30-92e0-e13de3c54bfd", 00:13:52.242 "strip_size_kb": 0, 00:13:52.242 "state": "online", 00:13:52.242 "raid_level": "raid1", 00:13:52.242 "superblock": true, 00:13:52.242 "num_base_bdevs": 4, 00:13:52.242 "num_base_bdevs_discovered": 3, 00:13:52.242 "num_base_bdevs_operational": 3, 00:13:52.242 "base_bdevs_list": [ 00:13:52.242 { 00:13:52.242 "name": "spare", 00:13:52.242 "uuid": "ddb5418b-7f39-5702-a760-1c3be5c0397a", 00:13:52.242 "is_configured": true, 00:13:52.242 "data_offset": 2048, 00:13:52.242 "data_size": 63488 00:13:52.242 }, 00:13:52.242 { 00:13:52.242 "name": null, 00:13:52.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.242 "is_configured": false, 00:13:52.242 "data_offset": 0, 00:13:52.242 "data_size": 63488 00:13:52.242 }, 00:13:52.242 { 00:13:52.242 "name": "BaseBdev3", 00:13:52.242 "uuid": "3f85961d-28ff-50f1-98e5-91975287e0c4", 00:13:52.242 "is_configured": true, 00:13:52.242 "data_offset": 2048, 00:13:52.242 "data_size": 63488 00:13:52.242 }, 00:13:52.242 { 00:13:52.242 "name": "BaseBdev4", 00:13:52.242 "uuid": "953e2ffa-1404-5445-8530-69485ed9649d", 00:13:52.242 "is_configured": true, 00:13:52.242 "data_offset": 2048, 00:13:52.242 "data_size": 63488 00:13:52.242 } 00:13:52.242 ] 00:13:52.242 }' 00:13:52.242 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.242 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:52.242 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:13:52.502 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:52.502 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:52.502 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:52.502 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.502 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:52.502 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:52.502 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.502 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.502 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.502 15:21:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.502 15:21:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.502 15:21:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.502 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.502 "name": "raid_bdev1", 00:13:52.502 "uuid": "774beb96-93ed-4e30-92e0-e13de3c54bfd", 00:13:52.502 "strip_size_kb": 0, 00:13:52.502 "state": "online", 00:13:52.502 "raid_level": "raid1", 00:13:52.502 "superblock": true, 00:13:52.502 "num_base_bdevs": 4, 00:13:52.502 "num_base_bdevs_discovered": 3, 00:13:52.502 "num_base_bdevs_operational": 3, 00:13:52.502 "base_bdevs_list": [ 00:13:52.502 { 00:13:52.502 "name": "spare", 00:13:52.502 "uuid": "ddb5418b-7f39-5702-a760-1c3be5c0397a", 00:13:52.502 "is_configured": true, 00:13:52.502 "data_offset": 2048, 00:13:52.502 "data_size": 63488 
00:13:52.502 }, 00:13:52.502 { 00:13:52.502 "name": null, 00:13:52.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.502 "is_configured": false, 00:13:52.502 "data_offset": 0, 00:13:52.502 "data_size": 63488 00:13:52.502 }, 00:13:52.502 { 00:13:52.502 "name": "BaseBdev3", 00:13:52.502 "uuid": "3f85961d-28ff-50f1-98e5-91975287e0c4", 00:13:52.502 "is_configured": true, 00:13:52.502 "data_offset": 2048, 00:13:52.502 "data_size": 63488 00:13:52.502 }, 00:13:52.502 { 00:13:52.502 "name": "BaseBdev4", 00:13:52.502 "uuid": "953e2ffa-1404-5445-8530-69485ed9649d", 00:13:52.502 "is_configured": true, 00:13:52.502 "data_offset": 2048, 00:13:52.502 "data_size": 63488 00:13:52.502 } 00:13:52.502 ] 00:13:52.502 }' 00:13:52.502 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.502 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:52.502 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.502 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:52.502 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:52.502 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.502 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.502 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.502 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.502 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:52.502 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.502 15:21:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.502 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.502 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.502 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.502 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.502 15:21:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.502 15:21:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.502 15:21:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.502 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.502 "name": "raid_bdev1", 00:13:52.502 "uuid": "774beb96-93ed-4e30-92e0-e13de3c54bfd", 00:13:52.502 "strip_size_kb": 0, 00:13:52.502 "state": "online", 00:13:52.502 "raid_level": "raid1", 00:13:52.502 "superblock": true, 00:13:52.502 "num_base_bdevs": 4, 00:13:52.502 "num_base_bdevs_discovered": 3, 00:13:52.502 "num_base_bdevs_operational": 3, 00:13:52.502 "base_bdevs_list": [ 00:13:52.502 { 00:13:52.502 "name": "spare", 00:13:52.502 "uuid": "ddb5418b-7f39-5702-a760-1c3be5c0397a", 00:13:52.502 "is_configured": true, 00:13:52.502 "data_offset": 2048, 00:13:52.502 "data_size": 63488 00:13:52.503 }, 00:13:52.503 { 00:13:52.503 "name": null, 00:13:52.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.503 "is_configured": false, 00:13:52.503 "data_offset": 0, 00:13:52.503 "data_size": 63488 00:13:52.503 }, 00:13:52.503 { 00:13:52.503 "name": "BaseBdev3", 00:13:52.503 "uuid": "3f85961d-28ff-50f1-98e5-91975287e0c4", 00:13:52.503 "is_configured": true, 00:13:52.503 "data_offset": 2048, 00:13:52.503 "data_size": 63488 00:13:52.503 }, 
00:13:52.503 { 00:13:52.503 "name": "BaseBdev4", 00:13:52.503 "uuid": "953e2ffa-1404-5445-8530-69485ed9649d", 00:13:52.503 "is_configured": true, 00:13:52.503 "data_offset": 2048, 00:13:52.503 "data_size": 63488 00:13:52.503 } 00:13:52.503 ] 00:13:52.503 }' 00:13:52.503 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.503 15:21:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.070 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:53.070 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.070 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.070 [2024-11-20 15:21:39.306889] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:53.071 [2024-11-20 15:21:39.306934] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:53.071 [2024-11-20 15:21:39.307023] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:53.071 [2024-11-20 15:21:39.307108] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:53.071 [2024-11-20 15:21:39.307127] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:53.071 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.071 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.071 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.071 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.071 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:53.071 15:21:39 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.071 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:53.071 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:53.071 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:53.071 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:53.071 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:53.071 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:53.071 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:53.071 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:53.071 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:53.071 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:53.071 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:53.071 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:53.071 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:53.329 /dev/nbd0 00:13:53.329 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:53.329 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:53.329 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:53.329 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:13:53.329 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:53.329 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:53.329 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:53.329 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:53.329 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:53.329 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:53.329 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:53.329 1+0 records in 00:13:53.329 1+0 records out 00:13:53.329 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417111 s, 9.8 MB/s 00:13:53.329 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.329 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:53.329 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.329 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:53.329 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:53.329 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:53.329 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:53.329 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:53.589 /dev/nbd1 00:13:53.589 15:21:39 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:53.589 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:53.589 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:53.589 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:53.589 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:53.589 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:53.589 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:53.589 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:53.589 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:53.589 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:53.589 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:53.589 1+0 records in 00:13:53.589 1+0 records out 00:13:53.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048607 s, 8.4 MB/s 00:13:53.589 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.589 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:53.589 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.589 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:53.589 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:53.589 15:21:39 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:53.589 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:53.589 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:53.848 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:53.848 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:53.848 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:53.848 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:53.848 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:53.848 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:53.848 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:54.108 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:54.108 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:54.108 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:54.108 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:54.108 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:54.108 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:54.108 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:54.108 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:54.108 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:13:54.108 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.367 [2024-11-20 15:21:40.639625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:54.367 [2024-11-20 
15:21:40.639723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.367 [2024-11-20 15:21:40.639751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:54.367 [2024-11-20 15:21:40.639763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.367 [2024-11-20 15:21:40.642388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.367 [2024-11-20 15:21:40.642441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:54.367 [2024-11-20 15:21:40.642551] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:54.367 [2024-11-20 15:21:40.642609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:54.367 [2024-11-20 15:21:40.642803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:54.367 [2024-11-20 15:21:40.642901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:54.367 spare 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.367 [2024-11-20 15:21:40.742839] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:54.367 [2024-11-20 15:21:40.742889] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:54.367 [2024-11-20 15:21:40.743271] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:54.367 [2024-11-20 15:21:40.743476] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007b00 00:13:54.367 [2024-11-20 15:21:40.743501] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:54.367 [2024-11-20 15:21:40.743734] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.367 15:21:40 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.367 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.367 "name": "raid_bdev1", 00:13:54.367 "uuid": "774beb96-93ed-4e30-92e0-e13de3c54bfd", 00:13:54.367 "strip_size_kb": 0, 00:13:54.367 "state": "online", 00:13:54.368 "raid_level": "raid1", 00:13:54.368 "superblock": true, 00:13:54.368 "num_base_bdevs": 4, 00:13:54.368 "num_base_bdevs_discovered": 3, 00:13:54.368 "num_base_bdevs_operational": 3, 00:13:54.368 "base_bdevs_list": [ 00:13:54.368 { 00:13:54.368 "name": "spare", 00:13:54.368 "uuid": "ddb5418b-7f39-5702-a760-1c3be5c0397a", 00:13:54.368 "is_configured": true, 00:13:54.368 "data_offset": 2048, 00:13:54.368 "data_size": 63488 00:13:54.368 }, 00:13:54.368 { 00:13:54.368 "name": null, 00:13:54.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.368 "is_configured": false, 00:13:54.368 "data_offset": 2048, 00:13:54.368 "data_size": 63488 00:13:54.368 }, 00:13:54.368 { 00:13:54.368 "name": "BaseBdev3", 00:13:54.368 "uuid": "3f85961d-28ff-50f1-98e5-91975287e0c4", 00:13:54.368 "is_configured": true, 00:13:54.368 "data_offset": 2048, 00:13:54.368 "data_size": 63488 00:13:54.368 }, 00:13:54.368 { 00:13:54.368 "name": "BaseBdev4", 00:13:54.368 "uuid": "953e2ffa-1404-5445-8530-69485ed9649d", 00:13:54.368 "is_configured": true, 00:13:54.368 "data_offset": 2048, 00:13:54.368 "data_size": 63488 00:13:54.368 } 00:13:54.368 ] 00:13:54.368 }' 00:13:54.368 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.368 15:21:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.936 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:54.936 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.936 15:21:41 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:54.936 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:54.936 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.936 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.936 15:21:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.936 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.936 15:21:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.936 15:21:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.937 "name": "raid_bdev1", 00:13:54.937 "uuid": "774beb96-93ed-4e30-92e0-e13de3c54bfd", 00:13:54.937 "strip_size_kb": 0, 00:13:54.937 "state": "online", 00:13:54.937 "raid_level": "raid1", 00:13:54.937 "superblock": true, 00:13:54.937 "num_base_bdevs": 4, 00:13:54.937 "num_base_bdevs_discovered": 3, 00:13:54.937 "num_base_bdevs_operational": 3, 00:13:54.937 "base_bdevs_list": [ 00:13:54.937 { 00:13:54.937 "name": "spare", 00:13:54.937 "uuid": "ddb5418b-7f39-5702-a760-1c3be5c0397a", 00:13:54.937 "is_configured": true, 00:13:54.937 "data_offset": 2048, 00:13:54.937 "data_size": 63488 00:13:54.937 }, 00:13:54.937 { 00:13:54.937 "name": null, 00:13:54.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.937 "is_configured": false, 00:13:54.937 "data_offset": 2048, 00:13:54.937 "data_size": 63488 00:13:54.937 }, 00:13:54.937 { 00:13:54.937 "name": "BaseBdev3", 00:13:54.937 "uuid": "3f85961d-28ff-50f1-98e5-91975287e0c4", 00:13:54.937 "is_configured": true, 00:13:54.937 "data_offset": 2048, 00:13:54.937 "data_size": 63488 00:13:54.937 }, 00:13:54.937 { 00:13:54.937 
"name": "BaseBdev4", 00:13:54.937 "uuid": "953e2ffa-1404-5445-8530-69485ed9649d", 00:13:54.937 "is_configured": true, 00:13:54.937 "data_offset": 2048, 00:13:54.937 "data_size": 63488 00:13:54.937 } 00:13:54.937 ] 00:13:54.937 }' 00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.937 [2024-11-20 15:21:41.346896] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 
00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.937 "name": "raid_bdev1", 00:13:54.937 "uuid": "774beb96-93ed-4e30-92e0-e13de3c54bfd", 00:13:54.937 "strip_size_kb": 0, 00:13:54.937 "state": "online", 00:13:54.937 "raid_level": "raid1", 00:13:54.937 "superblock": true, 00:13:54.937 "num_base_bdevs": 4, 00:13:54.937 "num_base_bdevs_discovered": 2, 00:13:54.937 "num_base_bdevs_operational": 2, 00:13:54.937 
"base_bdevs_list": [ 00:13:54.937 { 00:13:54.937 "name": null, 00:13:54.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.937 "is_configured": false, 00:13:54.937 "data_offset": 0, 00:13:54.937 "data_size": 63488 00:13:54.937 }, 00:13:54.937 { 00:13:54.937 "name": null, 00:13:54.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.937 "is_configured": false, 00:13:54.937 "data_offset": 2048, 00:13:54.937 "data_size": 63488 00:13:54.937 }, 00:13:54.937 { 00:13:54.937 "name": "BaseBdev3", 00:13:54.937 "uuid": "3f85961d-28ff-50f1-98e5-91975287e0c4", 00:13:54.937 "is_configured": true, 00:13:54.937 "data_offset": 2048, 00:13:54.937 "data_size": 63488 00:13:54.937 }, 00:13:54.937 { 00:13:54.937 "name": "BaseBdev4", 00:13:54.937 "uuid": "953e2ffa-1404-5445-8530-69485ed9649d", 00:13:54.937 "is_configured": true, 00:13:54.937 "data_offset": 2048, 00:13:54.937 "data_size": 63488 00:13:54.937 } 00:13:54.937 ] 00:13:54.937 }' 00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.937 15:21:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.505 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:55.505 15:21:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.505 15:21:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.505 [2024-11-20 15:21:41.806484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:55.505 [2024-11-20 15:21:41.806724] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:55.505 [2024-11-20 15:21:41.806759] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:55.505 [2024-11-20 15:21:41.806809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:55.505 [2024-11-20 15:21:41.822127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:13:55.505 15:21:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.505 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:55.505 [2024-11-20 15:21:41.824491] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:56.441 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:56.441 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.441 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:56.441 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:56.441 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.441 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.441 15:21:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.441 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.441 15:21:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.441 15:21:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.441 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.441 "name": "raid_bdev1", 00:13:56.441 "uuid": "774beb96-93ed-4e30-92e0-e13de3c54bfd", 00:13:56.441 "strip_size_kb": 0, 00:13:56.441 "state": "online", 00:13:56.441 "raid_level": "raid1", 
00:13:56.441 "superblock": true, 00:13:56.441 "num_base_bdevs": 4, 00:13:56.441 "num_base_bdevs_discovered": 3, 00:13:56.441 "num_base_bdevs_operational": 3, 00:13:56.441 "process": { 00:13:56.441 "type": "rebuild", 00:13:56.441 "target": "spare", 00:13:56.441 "progress": { 00:13:56.441 "blocks": 20480, 00:13:56.441 "percent": 32 00:13:56.441 } 00:13:56.441 }, 00:13:56.441 "base_bdevs_list": [ 00:13:56.441 { 00:13:56.441 "name": "spare", 00:13:56.441 "uuid": "ddb5418b-7f39-5702-a760-1c3be5c0397a", 00:13:56.441 "is_configured": true, 00:13:56.441 "data_offset": 2048, 00:13:56.441 "data_size": 63488 00:13:56.441 }, 00:13:56.441 { 00:13:56.441 "name": null, 00:13:56.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.441 "is_configured": false, 00:13:56.441 "data_offset": 2048, 00:13:56.441 "data_size": 63488 00:13:56.441 }, 00:13:56.441 { 00:13:56.441 "name": "BaseBdev3", 00:13:56.441 "uuid": "3f85961d-28ff-50f1-98e5-91975287e0c4", 00:13:56.441 "is_configured": true, 00:13:56.441 "data_offset": 2048, 00:13:56.441 "data_size": 63488 00:13:56.441 }, 00:13:56.441 { 00:13:56.441 "name": "BaseBdev4", 00:13:56.441 "uuid": "953e2ffa-1404-5445-8530-69485ed9649d", 00:13:56.441 "is_configured": true, 00:13:56.441 "data_offset": 2048, 00:13:56.441 "data_size": 63488 00:13:56.441 } 00:13:56.441 ] 00:13:56.441 }' 00:13:56.441 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.441 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:56.701 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.701 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:56.701 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:56.701 15:21:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:56.701 15:21:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.701 [2024-11-20 15:21:42.971571] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:56.701 [2024-11-20 15:21:43.030172] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:56.701 [2024-11-20 15:21:43.030255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.701 [2024-11-20 15:21:43.030276] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:56.701 [2024-11-20 15:21:43.030285] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:56.701 15:21:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.701 15:21:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:56.701 15:21:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.701 15:21:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.701 15:21:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.701 15:21:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.701 15:21:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:56.701 15:21:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.701 15:21:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.701 15:21:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.701 15:21:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.701 15:21:43 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.701 15:21:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.701 15:21:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.701 15:21:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.701 15:21:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.701 15:21:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.701 "name": "raid_bdev1", 00:13:56.701 "uuid": "774beb96-93ed-4e30-92e0-e13de3c54bfd", 00:13:56.701 "strip_size_kb": 0, 00:13:56.701 "state": "online", 00:13:56.701 "raid_level": "raid1", 00:13:56.701 "superblock": true, 00:13:56.701 "num_base_bdevs": 4, 00:13:56.701 "num_base_bdevs_discovered": 2, 00:13:56.701 "num_base_bdevs_operational": 2, 00:13:56.701 "base_bdevs_list": [ 00:13:56.701 { 00:13:56.701 "name": null, 00:13:56.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.701 "is_configured": false, 00:13:56.701 "data_offset": 0, 00:13:56.701 "data_size": 63488 00:13:56.701 }, 00:13:56.701 { 00:13:56.701 "name": null, 00:13:56.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.701 "is_configured": false, 00:13:56.701 "data_offset": 2048, 00:13:56.701 "data_size": 63488 00:13:56.701 }, 00:13:56.701 { 00:13:56.701 "name": "BaseBdev3", 00:13:56.701 "uuid": "3f85961d-28ff-50f1-98e5-91975287e0c4", 00:13:56.701 "is_configured": true, 00:13:56.701 "data_offset": 2048, 00:13:56.701 "data_size": 63488 00:13:56.701 }, 00:13:56.701 { 00:13:56.701 "name": "BaseBdev4", 00:13:56.701 "uuid": "953e2ffa-1404-5445-8530-69485ed9649d", 00:13:56.701 "is_configured": true, 00:13:56.701 "data_offset": 2048, 00:13:56.701 "data_size": 63488 00:13:56.701 } 00:13:56.701 ] 00:13:56.701 }' 00:13:56.701 15:21:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:56.701 15:21:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.270 15:21:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:57.270 15:21:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.270 15:21:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.270 [2024-11-20 15:21:43.504696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:57.270 [2024-11-20 15:21:43.504779] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.270 [2024-11-20 15:21:43.504815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:13:57.270 [2024-11-20 15:21:43.504828] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.270 [2024-11-20 15:21:43.505349] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.270 [2024-11-20 15:21:43.505376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:57.270 [2024-11-20 15:21:43.505476] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:57.270 [2024-11-20 15:21:43.505490] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:57.270 [2024-11-20 15:21:43.505506] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:57.270 [2024-11-20 15:21:43.505532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:57.270 [2024-11-20 15:21:43.520376] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:13:57.270 spare 00:13:57.270 15:21:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.270 15:21:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:57.270 [2024-11-20 15:21:43.522560] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:58.206 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:58.206 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.206 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:58.206 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:58.206 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.206 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.206 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.206 15:21:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.206 15:21:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.206 15:21:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.206 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.206 "name": "raid_bdev1", 00:13:58.206 "uuid": "774beb96-93ed-4e30-92e0-e13de3c54bfd", 00:13:58.206 "strip_size_kb": 0, 00:13:58.206 "state": "online", 00:13:58.206 
"raid_level": "raid1", 00:13:58.206 "superblock": true, 00:13:58.206 "num_base_bdevs": 4, 00:13:58.206 "num_base_bdevs_discovered": 3, 00:13:58.206 "num_base_bdevs_operational": 3, 00:13:58.206 "process": { 00:13:58.206 "type": "rebuild", 00:13:58.206 "target": "spare", 00:13:58.206 "progress": { 00:13:58.206 "blocks": 20480, 00:13:58.206 "percent": 32 00:13:58.206 } 00:13:58.206 }, 00:13:58.206 "base_bdevs_list": [ 00:13:58.206 { 00:13:58.206 "name": "spare", 00:13:58.206 "uuid": "ddb5418b-7f39-5702-a760-1c3be5c0397a", 00:13:58.207 "is_configured": true, 00:13:58.207 "data_offset": 2048, 00:13:58.207 "data_size": 63488 00:13:58.207 }, 00:13:58.207 { 00:13:58.207 "name": null, 00:13:58.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.207 "is_configured": false, 00:13:58.207 "data_offset": 2048, 00:13:58.207 "data_size": 63488 00:13:58.207 }, 00:13:58.207 { 00:13:58.207 "name": "BaseBdev3", 00:13:58.207 "uuid": "3f85961d-28ff-50f1-98e5-91975287e0c4", 00:13:58.207 "is_configured": true, 00:13:58.207 "data_offset": 2048, 00:13:58.207 "data_size": 63488 00:13:58.207 }, 00:13:58.207 { 00:13:58.207 "name": "BaseBdev4", 00:13:58.207 "uuid": "953e2ffa-1404-5445-8530-69485ed9649d", 00:13:58.207 "is_configured": true, 00:13:58.207 "data_offset": 2048, 00:13:58.207 "data_size": 63488 00:13:58.207 } 00:13:58.207 ] 00:13:58.207 }' 00:13:58.207 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.207 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:58.207 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.207 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:58.207 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:58.207 15:21:44 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.207 15:21:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.207 [2024-11-20 15:21:44.654867] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:58.466 [2024-11-20 15:21:44.728072] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:58.466 [2024-11-20 15:21:44.728149] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.466 [2024-11-20 15:21:44.728165] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:58.466 [2024-11-20 15:21:44.728176] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:58.466 15:21:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.466 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:58.466 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.466 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.466 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.466 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.466 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:58.466 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.466 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.466 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.466 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.466 
15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.466 15:21:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.466 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.466 15:21:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.466 15:21:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.466 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.466 "name": "raid_bdev1", 00:13:58.466 "uuid": "774beb96-93ed-4e30-92e0-e13de3c54bfd", 00:13:58.466 "strip_size_kb": 0, 00:13:58.466 "state": "online", 00:13:58.466 "raid_level": "raid1", 00:13:58.466 "superblock": true, 00:13:58.466 "num_base_bdevs": 4, 00:13:58.466 "num_base_bdevs_discovered": 2, 00:13:58.466 "num_base_bdevs_operational": 2, 00:13:58.466 "base_bdevs_list": [ 00:13:58.466 { 00:13:58.466 "name": null, 00:13:58.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.466 "is_configured": false, 00:13:58.466 "data_offset": 0, 00:13:58.466 "data_size": 63488 00:13:58.466 }, 00:13:58.466 { 00:13:58.466 "name": null, 00:13:58.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.466 "is_configured": false, 00:13:58.466 "data_offset": 2048, 00:13:58.466 "data_size": 63488 00:13:58.466 }, 00:13:58.466 { 00:13:58.466 "name": "BaseBdev3", 00:13:58.466 "uuid": "3f85961d-28ff-50f1-98e5-91975287e0c4", 00:13:58.466 "is_configured": true, 00:13:58.466 "data_offset": 2048, 00:13:58.466 "data_size": 63488 00:13:58.466 }, 00:13:58.466 { 00:13:58.466 "name": "BaseBdev4", 00:13:58.466 "uuid": "953e2ffa-1404-5445-8530-69485ed9649d", 00:13:58.466 "is_configured": true, 00:13:58.466 "data_offset": 2048, 00:13:58.466 "data_size": 63488 00:13:58.466 } 00:13:58.466 ] 00:13:58.466 }' 00:13:58.466 15:21:44 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.466 15:21:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.726 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:58.726 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.726 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:58.726 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:58.726 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.726 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.726 15:21:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.726 15:21:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.726 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.726 15:21:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.985 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.985 "name": "raid_bdev1", 00:13:58.985 "uuid": "774beb96-93ed-4e30-92e0-e13de3c54bfd", 00:13:58.985 "strip_size_kb": 0, 00:13:58.985 "state": "online", 00:13:58.985 "raid_level": "raid1", 00:13:58.985 "superblock": true, 00:13:58.985 "num_base_bdevs": 4, 00:13:58.985 "num_base_bdevs_discovered": 2, 00:13:58.985 "num_base_bdevs_operational": 2, 00:13:58.985 "base_bdevs_list": [ 00:13:58.985 { 00:13:58.985 "name": null, 00:13:58.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.985 "is_configured": false, 00:13:58.985 "data_offset": 0, 00:13:58.985 "data_size": 63488 00:13:58.985 }, 00:13:58.985 
{ 00:13:58.985 "name": null, 00:13:58.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.985 "is_configured": false, 00:13:58.985 "data_offset": 2048, 00:13:58.985 "data_size": 63488 00:13:58.985 }, 00:13:58.985 { 00:13:58.985 "name": "BaseBdev3", 00:13:58.985 "uuid": "3f85961d-28ff-50f1-98e5-91975287e0c4", 00:13:58.985 "is_configured": true, 00:13:58.985 "data_offset": 2048, 00:13:58.985 "data_size": 63488 00:13:58.985 }, 00:13:58.985 { 00:13:58.985 "name": "BaseBdev4", 00:13:58.985 "uuid": "953e2ffa-1404-5445-8530-69485ed9649d", 00:13:58.985 "is_configured": true, 00:13:58.985 "data_offset": 2048, 00:13:58.985 "data_size": 63488 00:13:58.985 } 00:13:58.985 ] 00:13:58.985 }' 00:13:58.985 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.985 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:58.985 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.985 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:58.985 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:58.985 15:21:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.985 15:21:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.985 15:21:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.985 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:58.985 15:21:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.985 15:21:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.985 [2024-11-20 15:21:45.321812] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:58.985 [2024-11-20 15:21:45.321882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:58.985 [2024-11-20 15:21:45.321906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:13:58.985 [2024-11-20 15:21:45.321920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:58.985 [2024-11-20 15:21:45.322367] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.985 [2024-11-20 15:21:45.322397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:58.985 [2024-11-20 15:21:45.322481] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:58.985 [2024-11-20 15:21:45.322498] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:58.985 [2024-11-20 15:21:45.322507] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:58.985 [2024-11-20 15:21:45.322533] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:58.985 BaseBdev1 00:13:58.985 15:21:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.985 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:59.922 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:59.922 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.922 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.922 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.922 15:21:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.922 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:59.922 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.922 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.922 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.922 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.922 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.922 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.922 15:21:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.922 15:21:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.922 15:21:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.922 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.922 "name": "raid_bdev1", 00:13:59.922 "uuid": "774beb96-93ed-4e30-92e0-e13de3c54bfd", 00:13:59.922 "strip_size_kb": 0, 00:13:59.922 "state": "online", 00:13:59.922 "raid_level": "raid1", 00:13:59.922 "superblock": true, 00:13:59.923 "num_base_bdevs": 4, 00:13:59.923 "num_base_bdevs_discovered": 2, 00:13:59.923 "num_base_bdevs_operational": 2, 00:13:59.923 "base_bdevs_list": [ 00:13:59.923 { 00:13:59.923 "name": null, 00:13:59.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.923 "is_configured": false, 00:13:59.923 "data_offset": 0, 00:13:59.923 "data_size": 63488 00:13:59.923 }, 00:13:59.923 { 00:13:59.923 "name": null, 00:13:59.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.923 
"is_configured": false, 00:13:59.923 "data_offset": 2048, 00:13:59.923 "data_size": 63488 00:13:59.923 }, 00:13:59.923 { 00:13:59.923 "name": "BaseBdev3", 00:13:59.923 "uuid": "3f85961d-28ff-50f1-98e5-91975287e0c4", 00:13:59.923 "is_configured": true, 00:13:59.923 "data_offset": 2048, 00:13:59.923 "data_size": 63488 00:13:59.923 }, 00:13:59.923 { 00:13:59.923 "name": "BaseBdev4", 00:13:59.923 "uuid": "953e2ffa-1404-5445-8530-69485ed9649d", 00:13:59.923 "is_configured": true, 00:13:59.923 "data_offset": 2048, 00:13:59.923 "data_size": 63488 00:13:59.923 } 00:13:59.923 ] 00:13:59.923 }' 00:13:59.923 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.923 15:21:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.491 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:00.491 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.491 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:00.491 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:00.491 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.491 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.491 15:21:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.491 15:21:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.491 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.491 15:21:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.491 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:00.491 "name": "raid_bdev1", 00:14:00.491 "uuid": "774beb96-93ed-4e30-92e0-e13de3c54bfd", 00:14:00.491 "strip_size_kb": 0, 00:14:00.491 "state": "online", 00:14:00.491 "raid_level": "raid1", 00:14:00.491 "superblock": true, 00:14:00.491 "num_base_bdevs": 4, 00:14:00.491 "num_base_bdevs_discovered": 2, 00:14:00.491 "num_base_bdevs_operational": 2, 00:14:00.491 "base_bdevs_list": [ 00:14:00.491 { 00:14:00.491 "name": null, 00:14:00.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.491 "is_configured": false, 00:14:00.491 "data_offset": 0, 00:14:00.491 "data_size": 63488 00:14:00.491 }, 00:14:00.491 { 00:14:00.491 "name": null, 00:14:00.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.491 "is_configured": false, 00:14:00.491 "data_offset": 2048, 00:14:00.491 "data_size": 63488 00:14:00.491 }, 00:14:00.491 { 00:14:00.491 "name": "BaseBdev3", 00:14:00.491 "uuid": "3f85961d-28ff-50f1-98e5-91975287e0c4", 00:14:00.491 "is_configured": true, 00:14:00.491 "data_offset": 2048, 00:14:00.491 "data_size": 63488 00:14:00.491 }, 00:14:00.491 { 00:14:00.491 "name": "BaseBdev4", 00:14:00.491 "uuid": "953e2ffa-1404-5445-8530-69485ed9649d", 00:14:00.491 "is_configured": true, 00:14:00.491 "data_offset": 2048, 00:14:00.491 "data_size": 63488 00:14:00.491 } 00:14:00.491 ] 00:14:00.491 }' 00:14:00.491 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.491 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:00.491 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.491 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:00.491 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:00.491 15:21:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:14:00.491 15:21:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:00.491 15:21:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:00.491 15:21:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:00.491 15:21:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:00.491 15:21:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:00.491 15:21:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:00.491 15:21:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.491 15:21:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.492 [2024-11-20 15:21:46.904280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:00.492 [2024-11-20 15:21:46.904509] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:00.492 [2024-11-20 15:21:46.904529] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:00.492 request: 00:14:00.492 { 00:14:00.492 "base_bdev": "BaseBdev1", 00:14:00.492 "raid_bdev": "raid_bdev1", 00:14:00.492 "method": "bdev_raid_add_base_bdev", 00:14:00.492 "req_id": 1 00:14:00.492 } 00:14:00.492 Got JSON-RPC error response 00:14:00.492 response: 00:14:00.492 { 00:14:00.492 "code": -22, 00:14:00.492 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:00.492 } 00:14:00.492 15:21:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:00.492 15:21:46 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:14:00.492 15:21:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:00.492 15:21:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:00.492 15:21:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:00.492 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:01.871 15:21:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:01.871 15:21:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.871 15:21:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.871 15:21:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:01.871 15:21:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:01.871 15:21:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:01.871 15:21:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.871 15:21:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.871 15:21:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.871 15:21:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.871 15:21:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.871 15:21:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.871 15:21:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.871 15:21:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:01.871 15:21:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.871 15:21:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.871 "name": "raid_bdev1", 00:14:01.871 "uuid": "774beb96-93ed-4e30-92e0-e13de3c54bfd", 00:14:01.871 "strip_size_kb": 0, 00:14:01.871 "state": "online", 00:14:01.871 "raid_level": "raid1", 00:14:01.871 "superblock": true, 00:14:01.871 "num_base_bdevs": 4, 00:14:01.871 "num_base_bdevs_discovered": 2, 00:14:01.871 "num_base_bdevs_operational": 2, 00:14:01.871 "base_bdevs_list": [ 00:14:01.871 { 00:14:01.871 "name": null, 00:14:01.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.871 "is_configured": false, 00:14:01.871 "data_offset": 0, 00:14:01.871 "data_size": 63488 00:14:01.871 }, 00:14:01.871 { 00:14:01.871 "name": null, 00:14:01.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.871 "is_configured": false, 00:14:01.871 "data_offset": 2048, 00:14:01.871 "data_size": 63488 00:14:01.871 }, 00:14:01.871 { 00:14:01.871 "name": "BaseBdev3", 00:14:01.871 "uuid": "3f85961d-28ff-50f1-98e5-91975287e0c4", 00:14:01.871 "is_configured": true, 00:14:01.871 "data_offset": 2048, 00:14:01.871 "data_size": 63488 00:14:01.871 }, 00:14:01.871 { 00:14:01.871 "name": "BaseBdev4", 00:14:01.871 "uuid": "953e2ffa-1404-5445-8530-69485ed9649d", 00:14:01.871 "is_configured": true, 00:14:01.871 "data_offset": 2048, 00:14:01.871 "data_size": 63488 00:14:01.871 } 00:14:01.871 ] 00:14:01.871 }' 00:14:01.871 15:21:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.871 15:21:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.131 15:21:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:02.131 15:21:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.131 15:21:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:02.131 15:21:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:02.131 15:21:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.131 15:21:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.131 15:21:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.131 15:21:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.131 15:21:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.131 15:21:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.131 15:21:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.131 "name": "raid_bdev1", 00:14:02.131 "uuid": "774beb96-93ed-4e30-92e0-e13de3c54bfd", 00:14:02.131 "strip_size_kb": 0, 00:14:02.131 "state": "online", 00:14:02.131 "raid_level": "raid1", 00:14:02.131 "superblock": true, 00:14:02.131 "num_base_bdevs": 4, 00:14:02.131 "num_base_bdevs_discovered": 2, 00:14:02.131 "num_base_bdevs_operational": 2, 00:14:02.131 "base_bdevs_list": [ 00:14:02.131 { 00:14:02.131 "name": null, 00:14:02.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.131 "is_configured": false, 00:14:02.131 "data_offset": 0, 00:14:02.131 "data_size": 63488 00:14:02.131 }, 00:14:02.131 { 00:14:02.131 "name": null, 00:14:02.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.131 "is_configured": false, 00:14:02.131 "data_offset": 2048, 00:14:02.131 "data_size": 63488 00:14:02.131 }, 00:14:02.131 { 00:14:02.131 "name": "BaseBdev3", 00:14:02.131 "uuid": "3f85961d-28ff-50f1-98e5-91975287e0c4", 00:14:02.131 "is_configured": true, 00:14:02.131 "data_offset": 2048, 00:14:02.131 "data_size": 63488 00:14:02.131 }, 
00:14:02.131 { 00:14:02.131 "name": "BaseBdev4", 00:14:02.131 "uuid": "953e2ffa-1404-5445-8530-69485ed9649d", 00:14:02.131 "is_configured": true, 00:14:02.131 "data_offset": 2048, 00:14:02.131 "data_size": 63488 00:14:02.131 } 00:14:02.131 ] 00:14:02.131 }' 00:14:02.131 15:21:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.131 15:21:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:02.131 15:21:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.131 15:21:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:02.131 15:21:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 77801 00:14:02.131 15:21:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77801 ']' 00:14:02.131 15:21:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 77801 00:14:02.131 15:21:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:02.131 15:21:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:02.131 15:21:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77801 00:14:02.131 15:21:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:02.131 15:21:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:02.131 15:21:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77801' 00:14:02.131 killing process with pid 77801 00:14:02.131 15:21:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 77801 00:14:02.131 Received shutdown signal, test time was about 60.000000 seconds 00:14:02.131 00:14:02.131 Latency(us) 00:14:02.131 
[2024-11-20T15:21:48.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:02.131 [2024-11-20T15:21:48.613Z] =================================================================================================================== 00:14:02.131 [2024-11-20T15:21:48.613Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:02.131 [2024-11-20 15:21:48.578432] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:02.131 [2024-11-20 15:21:48.578572] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:02.131 [2024-11-20 15:21:48.578657] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:02.131 [2024-11-20 15:21:48.578670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:02.131 15:21:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 77801 00:14:02.699 [2024-11-20 15:21:49.104674] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:04.076 00:14:04.076 real 0m26.618s 00:14:04.076 user 0m31.258s 00:14:04.076 sys 0m4.746s 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.076 ************************************ 00:14:04.076 END TEST raid_rebuild_test_sb 00:14:04.076 ************************************ 00:14:04.076 15:21:50 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:04.076 15:21:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:04.076 15:21:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:04.076 15:21:50 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:14:04.076 ************************************ 00:14:04.076 START TEST raid_rebuild_test_io 00:14:04.076 ************************************ 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78571 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78571 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78571 ']' 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:14:04.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:04.076 15:21:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.076 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:04.076 Zero copy mechanism will not be used. 00:14:04.076 [2024-11-20 15:21:50.474481] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:14:04.076 [2024-11-20 15:21:50.474645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78571 ] 00:14:04.335 [2024-11-20 15:21:50.660070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.335 [2024-11-20 15:21:50.785335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.594 [2024-11-20 15:21:51.006023] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:04.594 [2024-11-20 15:21:51.006063] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:05.164 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:05.164 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:14:05.164 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:05.164 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:14:05.164 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.164 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.164 BaseBdev1_malloc 00:14:05.164 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.164 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:05.164 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.164 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.164 [2024-11-20 15:21:51.452306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:05.164 [2024-11-20 15:21:51.452393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.164 [2024-11-20 15:21:51.452424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:05.164 [2024-11-20 15:21:51.452440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.164 [2024-11-20 15:21:51.455094] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.164 [2024-11-20 15:21:51.455180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:05.164 BaseBdev1 00:14:05.164 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.164 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:05.164 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:05.164 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.164 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:14:05.164 BaseBdev2_malloc 00:14:05.164 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.164 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:05.164 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.164 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.164 [2024-11-20 15:21:51.509435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:05.164 [2024-11-20 15:21:51.509513] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.164 [2024-11-20 15:21:51.509543] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:05.164 [2024-11-20 15:21:51.509574] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.164 [2024-11-20 15:21:51.512209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.164 [2024-11-20 15:21:51.512258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:05.164 BaseBdev2 00:14:05.164 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.164 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:05.164 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:05.164 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.164 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.164 BaseBdev3_malloc 00:14:05.164 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.164 15:21:51 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:05.165 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.165 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.165 [2024-11-20 15:21:51.587486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:05.165 [2024-11-20 15:21:51.587725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.165 [2024-11-20 15:21:51.587794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:05.165 [2024-11-20 15:21:51.587879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.165 [2024-11-20 15:21:51.590603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.165 [2024-11-20 15:21:51.590783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:05.165 BaseBdev3 00:14:05.165 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.165 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:05.165 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:05.165 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.165 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.165 BaseBdev4_malloc 00:14:05.424 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.424 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:05.424 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:05.424 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.424 [2024-11-20 15:21:51.647436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:05.424 [2024-11-20 15:21:51.647691] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.424 [2024-11-20 15:21:51.647759] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:05.424 [2024-11-20 15:21:51.647872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.424 [2024-11-20 15:21:51.650514] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.424 [2024-11-20 15:21:51.650706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:05.424 BaseBdev4 00:14:05.424 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.424 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:05.424 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.424 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.424 spare_malloc 00:14:05.424 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.424 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:05.424 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.424 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.424 spare_delay 00:14:05.424 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.424 15:21:51 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:05.424 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.424 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.424 [2024-11-20 15:21:51.717007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:05.424 [2024-11-20 15:21:51.717080] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.424 [2024-11-20 15:21:51.717106] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:05.424 [2024-11-20 15:21:51.717121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.424 [2024-11-20 15:21:51.719789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.424 [2024-11-20 15:21:51.719838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:05.424 spare 00:14:05.424 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.424 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:05.424 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.424 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.424 [2024-11-20 15:21:51.729031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:05.424 [2024-11-20 15:21:51.731496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:05.424 [2024-11-20 15:21:51.731750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:05.424 [2024-11-20 15:21:51.731819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:14:05.424 [2024-11-20 15:21:51.731917] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:05.424 [2024-11-20 15:21:51.731935] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:05.424 [2024-11-20 15:21:51.732263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:05.424 [2024-11-20 15:21:51.732458] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:05.424 [2024-11-20 15:21:51.732485] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:05.424 [2024-11-20 15:21:51.732715] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.424 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.424 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:05.424 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.425 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.425 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.425 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.425 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:05.425 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.425 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.425 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.425 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:14:05.425 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.425 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.425 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.425 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.425 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.425 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.425 "name": "raid_bdev1", 00:14:05.425 "uuid": "6698e43d-090f-4599-9008-568974edc644", 00:14:05.425 "strip_size_kb": 0, 00:14:05.425 "state": "online", 00:14:05.425 "raid_level": "raid1", 00:14:05.425 "superblock": false, 00:14:05.425 "num_base_bdevs": 4, 00:14:05.425 "num_base_bdevs_discovered": 4, 00:14:05.425 "num_base_bdevs_operational": 4, 00:14:05.425 "base_bdevs_list": [ 00:14:05.425 { 00:14:05.425 "name": "BaseBdev1", 00:14:05.425 "uuid": "e4f40bdd-8cc5-5160-9f3f-0ff2e8a95615", 00:14:05.425 "is_configured": true, 00:14:05.425 "data_offset": 0, 00:14:05.425 "data_size": 65536 00:14:05.425 }, 00:14:05.425 { 00:14:05.425 "name": "BaseBdev2", 00:14:05.425 "uuid": "f1c9e8a1-c0b6-5977-98a4-68e37cc41674", 00:14:05.425 "is_configured": true, 00:14:05.425 "data_offset": 0, 00:14:05.425 "data_size": 65536 00:14:05.425 }, 00:14:05.425 { 00:14:05.425 "name": "BaseBdev3", 00:14:05.425 "uuid": "08e7fbaf-214f-5f59-8ee3-29f46e13770e", 00:14:05.425 "is_configured": true, 00:14:05.425 "data_offset": 0, 00:14:05.425 "data_size": 65536 00:14:05.425 }, 00:14:05.425 { 00:14:05.425 "name": "BaseBdev4", 00:14:05.425 "uuid": "8317ff85-f7a5-5168-a146-1a8c0b0b4e92", 00:14:05.425 "is_configured": true, 00:14:05.425 "data_offset": 0, 00:14:05.425 "data_size": 65536 00:14:05.425 } 00:14:05.425 ] 00:14:05.425 }' 00:14:05.425 
15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.425 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.994 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:05.994 15:21:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.994 15:21:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.994 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:05.994 [2024-11-20 15:21:52.216765] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:05.994 15:21:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.994 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:05.994 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.994 15:21:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.994 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:05.994 15:21:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.994 15:21:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.994 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:05.994 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:05.994 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:05.994 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:05.994 15:21:52 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.994 15:21:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.994 [2024-11-20 15:21:52.312217] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:05.994 15:21:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.994 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:05.994 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.994 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.994 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.994 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.994 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.994 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.994 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.994 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.994 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.994 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.994 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.995 15:21:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.995 15:21:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.995 15:21:52 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.995 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.995 "name": "raid_bdev1", 00:14:05.995 "uuid": "6698e43d-090f-4599-9008-568974edc644", 00:14:05.995 "strip_size_kb": 0, 00:14:05.995 "state": "online", 00:14:05.995 "raid_level": "raid1", 00:14:05.995 "superblock": false, 00:14:05.995 "num_base_bdevs": 4, 00:14:05.995 "num_base_bdevs_discovered": 3, 00:14:05.995 "num_base_bdevs_operational": 3, 00:14:05.995 "base_bdevs_list": [ 00:14:05.995 { 00:14:05.995 "name": null, 00:14:05.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.995 "is_configured": false, 00:14:05.995 "data_offset": 0, 00:14:05.995 "data_size": 65536 00:14:05.995 }, 00:14:05.995 { 00:14:05.995 "name": "BaseBdev2", 00:14:05.995 "uuid": "f1c9e8a1-c0b6-5977-98a4-68e37cc41674", 00:14:05.995 "is_configured": true, 00:14:05.995 "data_offset": 0, 00:14:05.995 "data_size": 65536 00:14:05.995 }, 00:14:05.995 { 00:14:05.995 "name": "BaseBdev3", 00:14:05.995 "uuid": "08e7fbaf-214f-5f59-8ee3-29f46e13770e", 00:14:05.995 "is_configured": true, 00:14:05.995 "data_offset": 0, 00:14:05.995 "data_size": 65536 00:14:05.995 }, 00:14:05.995 { 00:14:05.995 "name": "BaseBdev4", 00:14:05.995 "uuid": "8317ff85-f7a5-5168-a146-1a8c0b0b4e92", 00:14:05.995 "is_configured": true, 00:14:05.995 "data_offset": 0, 00:14:05.995 "data_size": 65536 00:14:05.995 } 00:14:05.995 ] 00:14:05.995 }' 00:14:05.995 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.995 15:21:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.995 [2024-11-20 15:21:52.432572] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:05.995 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:05.995 Zero copy mechanism will not be used. 00:14:05.995 Running I/O for 60 seconds... 
00:14:06.564 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:14:06.564 15:21:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.564 15:21:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:06.564 [2024-11-20 15:21:52.798546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:06.564 15:21:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.564 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1
00:14:06.564 [2024-11-20 15:21:52.869422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0
00:14:06.564 [2024-11-20 15:21:52.871791] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:06.564 [2024-11-20 15:21:52.995729] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:14:06.564 [2024-11-20 15:21:52.997270] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:14:06.822 [2024-11-20 15:21:53.225286] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:14:06.822 [2024-11-20 15:21:53.226411] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:14:07.341 133.00 IOPS, 399.00 MiB/s [2024-11-20T15:21:53.823Z] [2024-11-20 15:21:53.586507] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:14:07.341 [2024-11-20 15:21:53.722632] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:14:07.341 [2024-11-20 15:21:53.723262] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:14:07.600 15:21:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:07.600 15:21:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:07.600 15:21:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:07.600 15:21:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:07.600 15:21:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:07.600 15:21:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:07.600 15:21:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:07.600 15:21:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:07.600 15:21:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:07.601 15:21:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:07.601 15:21:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:07.601 "name": "raid_bdev1",
00:14:07.601 "uuid": "6698e43d-090f-4599-9008-568974edc644",
00:14:07.601 "strip_size_kb": 0,
00:14:07.601 "state": "online",
00:14:07.601 "raid_level": "raid1",
00:14:07.601 "superblock": false,
00:14:07.601 "num_base_bdevs": 4,
00:14:07.601 "num_base_bdevs_discovered": 4,
00:14:07.601 "num_base_bdevs_operational": 4,
00:14:07.601 "process": {
00:14:07.601 "type": "rebuild",
00:14:07.601 "target": "spare",
00:14:07.601 "progress": {
00:14:07.601 "blocks": 10240,
00:14:07.601 "percent": 15
00:14:07.601 }
00:14:07.601 },
00:14:07.601 "base_bdevs_list": [
00:14:07.601 {
00:14:07.601 "name": "spare",
00:14:07.601 "uuid": "ccd85d5a-ff42-50d9-b824-4c2d2dc8c6f4",
00:14:07.601 "is_configured": true,
00:14:07.601 "data_offset": 0,
00:14:07.601 "data_size": 65536
00:14:07.601 },
00:14:07.601 {
00:14:07.601 "name": "BaseBdev2",
00:14:07.601 "uuid": "f1c9e8a1-c0b6-5977-98a4-68e37cc41674",
00:14:07.601 "is_configured": true,
00:14:07.601 "data_offset": 0,
00:14:07.601 "data_size": 65536
00:14:07.601 },
00:14:07.601 {
00:14:07.601 "name": "BaseBdev3",
00:14:07.601 "uuid": "08e7fbaf-214f-5f59-8ee3-29f46e13770e",
00:14:07.601 "is_configured": true,
00:14:07.601 "data_offset": 0,
00:14:07.601 "data_size": 65536
00:14:07.601 },
00:14:07.601 {
00:14:07.601 "name": "BaseBdev4",
00:14:07.601 "uuid": "8317ff85-f7a5-5168-a146-1a8c0b0b4e92",
00:14:07.601 "is_configured": true,
00:14:07.601 "data_offset": 0,
00:14:07.601 "data_size": 65536
00:14:07.601 }
00:14:07.601 ]
00:14:07.601 }'
00:14:07.601 15:21:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:07.601 15:21:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:07.601 15:21:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:07.601 15:21:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:07.601 15:21:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:14:07.601 15:21:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:07.601 15:21:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:07.601 [2024-11-20 15:21:53.970684] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:07.601 [2024-11-20 15:21:54.050554] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:14:07.601 [2024-11-20 15:21:54.052037] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:14:07.858 [2024-11-20 15:21:54.161044] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:14:07.858 [2024-11-20 15:21:54.165191] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:07.858 [2024-11-20 15:21:54.165273] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:07.858 [2024-11-20 15:21:54.165288] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:14:07.858 [2024-11-20 15:21:54.203899] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220
00:14:07.858 15:21:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:07.858 15:21:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:14:07.858 15:21:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:07.858 15:21:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:07.858 15:21:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:07.858 15:21:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:07.858 15:21:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:07.858 15:21:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:07.858 15:21:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:07.858 15:21:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:07.858 15:21:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:07.858 15:21:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:07.858 15:21:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:07.858 15:21:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:07.858 15:21:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:07.858 15:21:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:07.858 15:21:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:07.858 "name": "raid_bdev1",
00:14:07.858 "uuid": "6698e43d-090f-4599-9008-568974edc644",
00:14:07.858 "strip_size_kb": 0,
00:14:07.858 "state": "online",
00:14:07.858 "raid_level": "raid1",
00:14:07.858 "superblock": false,
00:14:07.858 "num_base_bdevs": 4,
00:14:07.858 "num_base_bdevs_discovered": 3,
00:14:07.858 "num_base_bdevs_operational": 3,
00:14:07.858 "base_bdevs_list": [
00:14:07.858 {
00:14:07.858 "name": null,
00:14:07.858 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:07.858 "is_configured": false,
00:14:07.858 "data_offset": 0,
00:14:07.858 "data_size": 65536
00:14:07.858 },
00:14:07.858 {
00:14:07.858 "name": "BaseBdev2",
00:14:07.858 "uuid": "f1c9e8a1-c0b6-5977-98a4-68e37cc41674",
00:14:07.858 "is_configured": true,
00:14:07.858 "data_offset": 0,
00:14:07.858 "data_size": 65536
00:14:07.858 },
00:14:07.858 {
00:14:07.858 "name": "BaseBdev3",
00:14:07.858 "uuid": "08e7fbaf-214f-5f59-8ee3-29f46e13770e",
00:14:07.858 "is_configured": true,
00:14:07.858 "data_offset": 0,
00:14:07.858 "data_size": 65536
00:14:07.858 },
00:14:07.858 {
00:14:07.858 "name": "BaseBdev4",
00:14:07.858 "uuid": "8317ff85-f7a5-5168-a146-1a8c0b0b4e92",
00:14:07.858 "is_configured": true,
00:14:07.858 "data_offset": 0,
00:14:07.858 "data_size": 65536
00:14:07.859 }
00:14:07.859 ]
00:14:07.859 }'
00:14:07.859 15:21:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:07.859 15:21:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:08.375 119.00 IOPS, 357.00 MiB/s [2024-11-20T15:21:54.857Z] 15:21:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:08.375 15:21:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:08.375 15:21:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:08.375 15:21:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:08.375 15:21:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:08.375 15:21:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:08.375 15:21:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:08.375 15:21:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:08.375 15:21:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:08.375 15:21:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:08.375 15:21:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:08.375 "name": "raid_bdev1",
00:14:08.375 "uuid": "6698e43d-090f-4599-9008-568974edc644",
00:14:08.375 "strip_size_kb": 0,
00:14:08.375 "state": "online",
00:14:08.375 "raid_level": "raid1",
00:14:08.375 "superblock": false,
00:14:08.375 "num_base_bdevs": 4,
00:14:08.375 "num_base_bdevs_discovered": 3,
00:14:08.375 "num_base_bdevs_operational": 3,
00:14:08.375 "base_bdevs_list": [
00:14:08.375 {
00:14:08.375 "name": null,
00:14:08.375 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:08.375 "is_configured": false,
00:14:08.375 "data_offset": 0,
00:14:08.375 "data_size": 65536
00:14:08.375 },
00:14:08.375 {
00:14:08.375 "name": "BaseBdev2",
00:14:08.375 "uuid": "f1c9e8a1-c0b6-5977-98a4-68e37cc41674",
00:14:08.375 "is_configured": true,
00:14:08.375 "data_offset": 0,
00:14:08.375 "data_size": 65536
00:14:08.375 },
00:14:08.375 {
00:14:08.375 "name": "BaseBdev3",
00:14:08.375 "uuid": "08e7fbaf-214f-5f59-8ee3-29f46e13770e",
00:14:08.375 "is_configured": true,
00:14:08.375 "data_offset": 0,
00:14:08.375 "data_size": 65536
00:14:08.375 },
00:14:08.375 {
00:14:08.375 "name": "BaseBdev4",
00:14:08.375 "uuid": "8317ff85-f7a5-5168-a146-1a8c0b0b4e92",
00:14:08.375 "is_configured": true,
00:14:08.375 "data_offset": 0,
00:14:08.375 "data_size": 65536
00:14:08.375 }
00:14:08.375 ]
00:14:08.375 }'
00:14:08.375 15:21:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:08.375 15:21:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:14:08.375 15:21:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:08.376 15:21:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:14:08.376 15:21:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:14:08.376 15:21:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:08.376 15:21:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:08.376 [2024-11-20 15:21:54.789411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:08.376 15:21:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:08.376 15:21:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1
00:14:08.634 [2024-11-20 15:21:54.859191] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:14:08.634 [2024-11-20 15:21:54.861584] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:08.634 [2024-11-20 15:21:54.972232] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:14:08.634 [2024-11-20 15:21:54.972876] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:14:08.634 [2024-11-20 15:21:55.081380] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:14:08.635 [2024-11-20 15:21:55.081940] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:14:09.202 [2024-11-20 15:21:55.422344] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:14:09.202 129.00 IOPS, 387.00 MiB/s [2024-11-20T15:21:55.684Z] [2024-11-20 15:21:55.534431] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:14:09.461 [2024-11-20 15:21:55.757153] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:14:09.461 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:09.461 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:09.461 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:09.461 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:09.461 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:09.461 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:09.461 15:21:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:09.461 15:21:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:09.461 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:09.461 15:21:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:09.461 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:09.461 "name": "raid_bdev1",
00:14:09.461 "uuid": "6698e43d-090f-4599-9008-568974edc644",
00:14:09.461 "strip_size_kb": 0,
00:14:09.461 "state": "online",
00:14:09.461 "raid_level": "raid1",
00:14:09.461 "superblock": false,
00:14:09.461 "num_base_bdevs": 4,
00:14:09.461 "num_base_bdevs_discovered": 4,
00:14:09.461 "num_base_bdevs_operational": 4,
00:14:09.461 "process": {
00:14:09.461 "type": "rebuild",
00:14:09.461 "target": "spare",
00:14:09.461 "progress": {
00:14:09.461 "blocks": 14336,
00:14:09.461 "percent": 21
00:14:09.461 }
00:14:09.461 },
00:14:09.461 "base_bdevs_list": [
00:14:09.461 {
00:14:09.461 "name": "spare",
00:14:09.461 "uuid": "ccd85d5a-ff42-50d9-b824-4c2d2dc8c6f4",
00:14:09.461 "is_configured": true,
00:14:09.461 "data_offset": 0,
00:14:09.461 "data_size": 65536
00:14:09.461 },
00:14:09.461 {
00:14:09.461 "name": "BaseBdev2",
00:14:09.461 "uuid": "f1c9e8a1-c0b6-5977-98a4-68e37cc41674",
00:14:09.461 "is_configured": true,
00:14:09.461 "data_offset": 0,
00:14:09.461 "data_size": 65536
00:14:09.461 },
00:14:09.461 {
00:14:09.461 "name": "BaseBdev3",
00:14:09.461 "uuid": "08e7fbaf-214f-5f59-8ee3-29f46e13770e",
00:14:09.461 "is_configured": true,
00:14:09.461 "data_offset": 0,
00:14:09.461 "data_size": 65536
00:14:09.461 },
00:14:09.461 {
00:14:09.461 "name": "BaseBdev4",
00:14:09.461 "uuid": "8317ff85-f7a5-5168-a146-1a8c0b0b4e92",
00:14:09.461 "is_configured": true,
00:14:09.461 "data_offset": 0,
00:14:09.461 "data_size": 65536
00:14:09.461 }
00:14:09.461 ]
00:14:09.461 }'
00:14:09.461 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:09.461 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:09.720 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:09.720 [2024-11-20 15:21:55.975776] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:14:09.720 [2024-11-20 15:21:55.976519] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:14:09.720 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:09.721 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']'
00:14:09.721 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4
00:14:09.721 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:14:09.721 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']'
00:14:09.721 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:14:09.721 15:21:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:09.721 15:21:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:09.721 [2024-11-20 15:21:55.995931] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:14:09.980 [2024-11-20 15:21:56.310348] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220
00:14:09.980 [2024-11-20 15:21:56.310628] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0
00:14:09.980 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:09.980 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]=
00:14:09.980 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- ))
00:14:09.980 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:09.980 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:09.980 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:09.980 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:09.980 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:09.980 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:09.980 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:09.980 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:09.980 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:09.980 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:09.980 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:09.980 "name": "raid_bdev1",
00:14:09.980 "uuid": "6698e43d-090f-4599-9008-568974edc644",
00:14:09.980 "strip_size_kb": 0,
00:14:09.980 "state": "online",
00:14:09.980 "raid_level": "raid1",
00:14:09.980 "superblock": false,
00:14:09.980 "num_base_bdevs": 4,
00:14:09.980 "num_base_bdevs_discovered": 3,
00:14:09.980 "num_base_bdevs_operational": 3,
00:14:09.980 "process": {
00:14:09.980 "type": "rebuild",
00:14:09.980 "target": "spare",
00:14:09.980 "progress": {
00:14:09.980 "blocks": 18432,
00:14:09.980 "percent": 28
00:14:09.980 }
00:14:09.980 },
00:14:09.980 "base_bdevs_list": [
00:14:09.980 {
00:14:09.980 "name": "spare",
00:14:09.980 "uuid": "ccd85d5a-ff42-50d9-b824-4c2d2dc8c6f4",
00:14:09.980 "is_configured": true,
00:14:09.980 "data_offset": 0,
00:14:09.980 "data_size": 65536
00:14:09.980 },
00:14:09.980 {
00:14:09.980 "name": null,
00:14:09.980 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:09.980 "is_configured": false,
00:14:09.980 "data_offset": 0,
00:14:09.980 "data_size": 65536
00:14:09.980 },
00:14:09.980 {
00:14:09.980 "name": "BaseBdev3",
00:14:09.980 "uuid": "08e7fbaf-214f-5f59-8ee3-29f46e13770e",
00:14:09.980 "is_configured": true,
00:14:09.980 "data_offset": 0,
00:14:09.980 "data_size": 65536
00:14:09.980 },
00:14:09.980 {
00:14:09.980 "name": "BaseBdev4",
00:14:09.980 "uuid": "8317ff85-f7a5-5168-a146-1a8c0b0b4e92",
00:14:09.980 "is_configured": true,
00:14:09.980 "data_offset": 0,
00:14:09.980 "data_size": 65536
00:14:09.980 }
00:14:09.980 ]
00:14:09.980 }'
00:14:09.980 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:09.980 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:09.980 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:10.240 112.00 IOPS, 336.00 MiB/s [2024-11-20T15:21:56.462Z] [2024-11-20 15:21:56.444809] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:14:10.240 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:10.240 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=480
00:14:10.240 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:14:10.240 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:10.240 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:10.240 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:10.240 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:10.240 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:10.240 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:10.240 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:10.240 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:10.240 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:10.240 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:10.240 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:10.240 "name": "raid_bdev1",
00:14:10.240 "uuid": "6698e43d-090f-4599-9008-568974edc644",
00:14:10.240 "strip_size_kb": 0,
00:14:10.240 "state": "online",
00:14:10.240 "raid_level": "raid1",
00:14:10.240 "superblock": false,
00:14:10.240 "num_base_bdevs": 4,
00:14:10.240 "num_base_bdevs_discovered": 3,
00:14:10.240 "num_base_bdevs_operational": 3,
00:14:10.240 "process": {
00:14:10.240 "type": "rebuild",
00:14:10.240 "target": "spare",
00:14:10.240 "progress": {
00:14:10.240 "blocks": 20480,
00:14:10.240 "percent": 31
00:14:10.240 }
00:14:10.240 },
00:14:10.240 "base_bdevs_list": [
00:14:10.240 {
00:14:10.240 "name": "spare",
00:14:10.240 "uuid": "ccd85d5a-ff42-50d9-b824-4c2d2dc8c6f4",
00:14:10.240 "is_configured": true,
00:14:10.240 "data_offset": 0,
00:14:10.240 "data_size": 65536
00:14:10.240 },
00:14:10.240 {
00:14:10.240 "name": null,
00:14:10.240 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:10.240 "is_configured": false,
00:14:10.240 "data_offset": 0,
00:14:10.240 "data_size": 65536
00:14:10.240 },
00:14:10.240 {
00:14:10.240 "name": "BaseBdev3",
00:14:10.240 "uuid": "08e7fbaf-214f-5f59-8ee3-29f46e13770e",
00:14:10.240 "is_configured": true,
00:14:10.240 "data_offset": 0,
00:14:10.240 "data_size": 65536
00:14:10.240 },
00:14:10.240 {
00:14:10.240 "name": "BaseBdev4",
00:14:10.240 "uuid": "8317ff85-f7a5-5168-a146-1a8c0b0b4e92",
00:14:10.240 "is_configured": true,
00:14:10.240 "data_offset": 0,
00:14:10.240 "data_size": 65536
00:14:10.240 }
00:14:10.240 ]
00:14:10.240 }'
00:14:10.240 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:10.240 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:10.240 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' [2024-11-20 15:21:56.568803] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:14:10.240 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:10.240 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:14:10.499 [2024-11-20 15:21:56.809206] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720
00:14:10.757 [2024-11-20 15:21:57.119567] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864
00:14:10.757 [2024-11-20 15:21:57.120175] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864
00:14:10.757 [2024-11-20 15:21:57.231816] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864
00:14:11.274 104.40 IOPS, 313.20 MiB/s [2024-11-20T15:21:57.756Z] [2024-11-20 15:21:57.557670] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008
00:14:11.274 [2024-11-20 15:21:57.558788] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008
00:14:11.274 15:21:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:14:11.274 15:21:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:11.274 15:21:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:11.274 15:21:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:11.274 15:21:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:11.274 15:21:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:11.274 15:21:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:11.274 15:21:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:11.274 15:21:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:11.274 15:21:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:11.274 15:21:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:11.274 15:21:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:11.274 "name": "raid_bdev1",
00:14:11.274 "uuid": "6698e43d-090f-4599-9008-568974edc644",
00:14:11.274 "strip_size_kb": 0,
00:14:11.274 "state": "online",
00:14:11.274 "raid_level": "raid1",
00:14:11.274 "superblock": false,
00:14:11.274 "num_base_bdevs": 4,
00:14:11.274 "num_base_bdevs_discovered": 3,
00:14:11.274 "num_base_bdevs_operational": 3,
00:14:11.274 "process": {
00:14:11.274 "type": "rebuild",
00:14:11.274 "target": "spare",
00:14:11.274 "progress": {
00:14:11.274 "blocks": 38912,
00:14:11.274 "percent": 59
00:14:11.274 }
00:14:11.274 },
00:14:11.274 "base_bdevs_list": [
00:14:11.274 {
00:14:11.274 "name": "spare",
00:14:11.274 "uuid": "ccd85d5a-ff42-50d9-b824-4c2d2dc8c6f4",
00:14:11.274 "is_configured": true,
00:14:11.274 "data_offset": 0,
00:14:11.274 "data_size": 65536
00:14:11.274 },
00:14:11.274 {
00:14:11.274 "name": null,
00:14:11.274 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:11.274 "is_configured": false,
00:14:11.274 "data_offset": 0,
00:14:11.274 "data_size": 65536
00:14:11.274 },
00:14:11.274 {
00:14:11.274 "name": "BaseBdev3",
00:14:11.274 "uuid": "08e7fbaf-214f-5f59-8ee3-29f46e13770e",
00:14:11.274 "is_configured": true,
00:14:11.274 "data_offset": 0,
00:14:11.274 "data_size": 65536
00:14:11.275 },
00:14:11.275 {
00:14:11.275 "name": "BaseBdev4",
00:14:11.275 "uuid": "8317ff85-f7a5-5168-a146-1a8c0b0b4e92",
00:14:11.275 "is_configured": true,
00:14:11.275 "data_offset": 0,
00:14:11.275 "data_size": 65536
00:14:11.275 }
00:14:11.275 ]
00:14:11.275 }'
00:14:11.275 15:21:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:11.275 15:21:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:11.275 15:21:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:11.533 15:21:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:11.533 15:21:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:14:11.533 [2024-11-20 15:21:57.760663] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008
00:14:12.102 [2024-11-20 15:21:58.287284] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296
00:14:12.362 93.67 IOPS, 281.00 MiB/s [2024-11-20T15:21:58.844Z] 15:21:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:14:12.362 15:21:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:12.362 15:21:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:12.362 15:21:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:12.362 15:21:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:12.362 15:21:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:12.362 15:21:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:12.362 15:21:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:12.362 15:21:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:12.362 15:21:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:12.362 15:21:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:12.362 15:21:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:12.362 "name": "raid_bdev1",
00:14:12.362 "uuid": "6698e43d-090f-4599-9008-568974edc644",
00:14:12.362 "strip_size_kb": 0,
00:14:12.362 "state": "online",
00:14:12.362 "raid_level": "raid1",
00:14:12.362 "superblock": false,
00:14:12.362 "num_base_bdevs": 4,
00:14:12.362 "num_base_bdevs_discovered": 3,
00:14:12.362 "num_base_bdevs_operational": 3,
00:14:12.362 "process": {
00:14:12.362 "type": "rebuild",
00:14:12.362 "target": "spare",
00:14:12.362 "progress": {
00:14:12.362 "blocks": 57344,
00:14:12.362 "percent": 87
00:14:12.362 }
00:14:12.362 },
00:14:12.362 "base_bdevs_list": [
00:14:12.362 {
00:14:12.362 "name": "spare",
00:14:12.362 "uuid": "ccd85d5a-ff42-50d9-b824-4c2d2dc8c6f4",
00:14:12.362 "is_configured": true,
00:14:12.362 "data_offset": 0,
00:14:12.362 "data_size": 65536
00:14:12.362 },
00:14:12.362 {
00:14:12.362 "name": null,
00:14:12.362 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:12.362 "is_configured": false,
00:14:12.362 "data_offset": 0,
00:14:12.362 "data_size": 65536
00:14:12.362 },
00:14:12.362 {
00:14:12.362 "name": "BaseBdev3",
00:14:12.362 "uuid": "08e7fbaf-214f-5f59-8ee3-29f46e13770e",
00:14:12.362 "is_configured": true,
00:14:12.362 "data_offset": 0,
00:14:12.362 "data_size": 65536
00:14:12.362 },
00:14:12.362 {
00:14:12.362 "name": "BaseBdev4",
00:14:12.362 "uuid": "8317ff85-f7a5-5168-a146-1a8c0b0b4e92",
00:14:12.362 "is_configured": true,
00:14:12.362 "data_offset": 0,
00:14:12.362 "data_size": 65536
00:14:12.362 }
00:14:12.362 ]
00:14:12.362 }'
00:14:12.622 15:21:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:12.622 15:21:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:12.622 15:21:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:12.622 15:21:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:12.622 15:21:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:14:12.882 [2024-11-20 15:21:59.173285] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:14:12.882 [2024-11-20 15:21:59.273042] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:14:12.882 [2024-11-20 15:21:59.276097] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:13.709 85.00 IOPS, 255.00 MiB/s [2024-11-20T15:22:00.191Z] 15:21:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:14:13.709 15:21:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:13.709 15:21:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:13.709 15:21:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:13.709 15:21:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:13.709 15:21:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:13.709 15:21:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:13.709 15:21:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.709 15:21:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:13.709 15:21:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:13.709 15:21:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.709 15:21:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:13.709 "name": "raid_bdev1",
00:14:13.709 "uuid": "6698e43d-090f-4599-9008-568974edc644",
00:14:13.709 "strip_size_kb": 0,
00:14:13.709 "state": "online",
00:14:13.709 "raid_level": "raid1",
00:14:13.709 "superblock": false,
00:14:13.709 "num_base_bdevs": 4,
00:14:13.709 "num_base_bdevs_discovered": 3,
00:14:13.709 "num_base_bdevs_operational": 3,
00:14:13.709 "base_bdevs_list": [
00:14:13.709 {
00:14:13.709 "name": "spare",
00:14:13.709 "uuid": "ccd85d5a-ff42-50d9-b824-4c2d2dc8c6f4",
00:14:13.709 "is_configured": true,
00:14:13.709 "data_offset": 0,
00:14:13.709 "data_size": 65536
00:14:13.709 },
00:14:13.709 {
00:14:13.709 "name": null,
00:14:13.709 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:13.709 "is_configured": false,
00:14:13.709 "data_offset": 0,
00:14:13.709 "data_size": 65536
00:14:13.709 },
00:14:13.709 {
00:14:13.709 "name": "BaseBdev3",
00:14:13.709 "uuid": "08e7fbaf-214f-5f59-8ee3-29f46e13770e",
00:14:13.709 "is_configured": true,
00:14:13.709 "data_offset": 0,
00:14:13.709 "data_size": 65536
00:14:13.709 },
00:14:13.709 {
00:14:13.709 "name": "BaseBdev4",
00:14:13.709 "uuid": "8317ff85-f7a5-5168-a146-1a8c0b0b4e92",
00:14:13.709 "is_configured": true,
00:14:13.709 "data_offset": 0,
00:14:13.709 "data_size": 65536
00:14:13.709 }
00:14:13.709 ]
00:14:13.709 }'
00:14:13.709 15:21:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:13.709 15:21:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:14:13.709 15:21:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:13.709 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:14:13.709 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break
00:14:13.709 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:13.709 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:13.709 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:13.709 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:13.709 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:13.709 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:13.709 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:13.709 15:22:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.709 15:22:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:13.709 15:22:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.709 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:13.709 "name": "raid_bdev1",
00:14:13.709 "uuid": "6698e43d-090f-4599-9008-568974edc644",
00:14:13.709 "strip_size_kb": 0,
00:14:13.709 "state": "online",
00:14:13.709 "raid_level": "raid1",
00:14:13.709 "superblock": false,
00:14:13.709 "num_base_bdevs": 4,
00:14:13.709 "num_base_bdevs_discovered": 3,
00:14:13.709 "num_base_bdevs_operational": 3,
00:14:13.709 "base_bdevs_list": [
00:14:13.709 {
00:14:13.709 "name": "spare",
00:14:13.709 "uuid": "ccd85d5a-ff42-50d9-b824-4c2d2dc8c6f4",
00:14:13.709 "is_configured": true,
00:14:13.709 "data_offset": 0,
00:14:13.709 "data_size": 65536
00:14:13.709 },
00:14:13.709 {
00:14:13.709 "name": null,
00:14:13.709 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:13.709 "is_configured": false,
00:14:13.710 "data_offset": 0,
00:14:13.710 "data_size": 65536
00:14:13.710 },
00:14:13.710 {
00:14:13.710 "name": "BaseBdev3",
00:14:13.710 "uuid": "08e7fbaf-214f-5f59-8ee3-29f46e13770e",
00:14:13.710 "is_configured": true,
00:14:13.710 "data_offset": 0,
00:14:13.710 "data_size": 65536
00:14:13.710 },
00:14:13.710 {
00:14:13.710 "name": "BaseBdev4",
00:14:13.710 "uuid": "8317ff85-f7a5-5168-a146-1a8c0b0b4e92",
00:14:13.710 "is_configured": true,
00:14:13.710 "data_offset": 0,
00:14:13.710 "data_size": 65536
00:14:13.710 }
00:14:13.710 ]
00:14:13.710 }'
00:14:13.710 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:13.710 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:14:13.710 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:13.710 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:13.710 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:13.710 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:13.710 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:13.710 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:13.710 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:13.710 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:13.710 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.710 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.710 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.710 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.710 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.710 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.710 15:22:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.710 15:22:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.969 15:22:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.969 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.969 "name": "raid_bdev1", 00:14:13.969 "uuid": "6698e43d-090f-4599-9008-568974edc644", 00:14:13.969 "strip_size_kb": 0, 00:14:13.969 
"state": "online", 00:14:13.969 "raid_level": "raid1", 00:14:13.969 "superblock": false, 00:14:13.969 "num_base_bdevs": 4, 00:14:13.969 "num_base_bdevs_discovered": 3, 00:14:13.969 "num_base_bdevs_operational": 3, 00:14:13.969 "base_bdevs_list": [ 00:14:13.969 { 00:14:13.969 "name": "spare", 00:14:13.969 "uuid": "ccd85d5a-ff42-50d9-b824-4c2d2dc8c6f4", 00:14:13.969 "is_configured": true, 00:14:13.969 "data_offset": 0, 00:14:13.969 "data_size": 65536 00:14:13.969 }, 00:14:13.969 { 00:14:13.969 "name": null, 00:14:13.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.969 "is_configured": false, 00:14:13.969 "data_offset": 0, 00:14:13.969 "data_size": 65536 00:14:13.969 }, 00:14:13.969 { 00:14:13.969 "name": "BaseBdev3", 00:14:13.969 "uuid": "08e7fbaf-214f-5f59-8ee3-29f46e13770e", 00:14:13.969 "is_configured": true, 00:14:13.969 "data_offset": 0, 00:14:13.969 "data_size": 65536 00:14:13.969 }, 00:14:13.969 { 00:14:13.969 "name": "BaseBdev4", 00:14:13.969 "uuid": "8317ff85-f7a5-5168-a146-1a8c0b0b4e92", 00:14:13.969 "is_configured": true, 00:14:13.969 "data_offset": 0, 00:14:13.969 "data_size": 65536 00:14:13.969 } 00:14:13.969 ] 00:14:13.969 }' 00:14:13.969 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.969 15:22:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.228 79.50 IOPS, 238.50 MiB/s [2024-11-20T15:22:00.710Z] 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:14.228 15:22:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.228 15:22:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.228 [2024-11-20 15:22:00.629541] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:14.228 [2024-11-20 15:22:00.629587] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 
00:14:14.228 00:14:14.228 Latency(us) 00:14:14.228 [2024-11-20T15:22:00.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.228 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:14.228 raid_bdev1 : 8.27 78.38 235.14 0.00 0.00 17477.94 348.74 112016.55 00:14:14.228 [2024-11-20T15:22:00.710Z] =================================================================================================================== 00:14:14.228 [2024-11-20T15:22:00.710Z] Total : 78.38 235.14 0.00 0.00 17477.94 348.74 112016.55 00:14:14.488 [2024-11-20 15:22:00.713470] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:14.488 [2024-11-20 15:22:00.713567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.488 [2024-11-20 15:22:00.713697] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:14.488 [2024-11-20 15:22:00.713715] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:14.488 { 00:14:14.488 "results": [ 00:14:14.488 { 00:14:14.488 "job": "raid_bdev1", 00:14:14.488 "core_mask": "0x1", 00:14:14.488 "workload": "randrw", 00:14:14.488 "percentage": 50, 00:14:14.488 "status": "finished", 00:14:14.488 "queue_depth": 2, 00:14:14.488 "io_size": 3145728, 00:14:14.488 "runtime": 8.26745, 00:14:14.488 "iops": 78.37966966839836, 00:14:14.488 "mibps": 235.13900900519508, 00:14:14.488 "io_failed": 0, 00:14:14.488 "io_timeout": 0, 00:14:14.488 "avg_latency_us": 17477.94464772671, 00:14:14.488 "min_latency_us": 348.73574297188753, 00:14:14.488 "max_latency_us": 112016.55261044177 00:14:14.488 } 00:14:14.488 ], 00:14:14.488 "core_count": 1 00:14:14.488 } 00:14:14.488 15:22:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.488 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:14:14.488 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:14.488 15:22:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.488 15:22:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.488 15:22:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.488 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:14.488 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:14.488 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:14.488 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:14.488 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:14.488 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:14.488 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:14.488 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:14.488 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:14.488 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:14.488 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:14.488 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:14.488 15:22:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:14.747 /dev/nbd0 00:14:14.747 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:14.747 15:22:01 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:14.747 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:14.747 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:14.747 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:14.747 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:14.747 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:14.747 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:14.747 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:14.747 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:14.747 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:14.747 1+0 records in 00:14:14.747 1+0 records out 00:14:14.747 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452483 s, 9.1 MB/s 00:14:14.747 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:14.747 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:14.747 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:14.747 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:14.747 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:14.747 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:14.747 15:22:01 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:14.747 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:14.747 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:14.747 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:14.747 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:14.747 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:14.747 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:14.747 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:14.747 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:14.747 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:14.747 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:14.747 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:14.747 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:14.747 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:14.747 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:14.747 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:15.007 /dev/nbd1 00:14:15.007 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:15.007 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:15.007 15:22:01 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:15.007 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:15.007 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:15.007 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:15.007 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:15.007 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:15.007 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:15.007 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:15.007 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:15.007 1+0 records in 00:14:15.007 1+0 records out 00:14:15.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428496 s, 9.6 MB/s 00:14:15.007 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.007 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:15.007 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.007 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:15.007 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:15.007 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:15.007 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:15.007 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 
/dev/nbd0 /dev/nbd1 00:14:15.311 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:15.311 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:15.311 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:15.311 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:15.311 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:15.311 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:15.311 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:15.311 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:15.311 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:15.311 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:15.311 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:15.311 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:15.311 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:15.311 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:15.311 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:15.311 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:15.311 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:15.311 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 
00:14:15.311 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:15.311 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:15.311 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:15.311 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:15.311 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:15.311 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:15.311 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:15.311 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:15.311 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:15.569 /dev/nbd1 00:14:15.569 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:15.569 15:22:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:15.569 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:15.569 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:15.569 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:15.569 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:15.569 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:15.570 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:15.570 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:15.570 15:22:01 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:15.570 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:15.570 1+0 records in 00:14:15.570 1+0 records out 00:14:15.570 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320267 s, 12.8 MB/s 00:14:15.570 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.570 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:15.570 15:22:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.570 15:22:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:15.570 15:22:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:15.570 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:15.570 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:15.570 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:15.828 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:15.828 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:15.828 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:15.828 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:15.828 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:15.828 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:15.828 15:22:02 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:16.088 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:16.088 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:16.088 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:16.088 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:16.088 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:16.088 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:16.088 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:16.088 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:16.088 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:16.088 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:16.088 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:16.088 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:16.088 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:16.088 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:16.088 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:16.088 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:16.348 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:16.348 15:22:02 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:16.348 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:16.348 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:16.348 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:16.348 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:16.348 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:16.348 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:16.348 15:22:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78571 00:14:16.348 15:22:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78571 ']' 00:14:16.348 15:22:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78571 00:14:16.348 15:22:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:14:16.348 15:22:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:16.348 15:22:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78571 00:14:16.348 15:22:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:16.348 15:22:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:16.348 killing process with pid 78571 00:14:16.348 15:22:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78571' 00:14:16.348 Received shutdown signal, test time was about 10.208259 seconds 00:14:16.348 00:14:16.348 Latency(us) 00:14:16.348 [2024-11-20T15:22:02.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.348 [2024-11-20T15:22:02.830Z] 
=================================================================================================================== 00:14:16.348 [2024-11-20T15:22:02.830Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:16.348 15:22:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78571 00:14:16.348 [2024-11-20 15:22:02.627044] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:16.348 15:22:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78571 00:14:16.607 [2024-11-20 15:22:03.056697] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:17.987 00:14:17.987 real 0m13.892s 00:14:17.987 user 0m17.409s 00:14:17.987 sys 0m2.167s 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.987 ************************************ 00:14:17.987 END TEST raid_rebuild_test_io 00:14:17.987 ************************************ 00:14:17.987 15:22:04 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:17.987 15:22:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:17.987 15:22:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:17.987 15:22:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:17.987 ************************************ 00:14:17.987 START TEST raid_rebuild_test_sb_io 00:14:17.987 ************************************ 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:17.987 15:22:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78984 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78984 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 78984 ']' 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:17.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:17.987 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.987 [2024-11-20 15:22:04.466771] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:14:17.987 [2024-11-20 15:22:04.466909] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78984 ] 00:14:17.987 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:17.987 Zero copy mechanism will not be used. 00:14:18.246 [2024-11-20 15:22:04.650750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.505 [2024-11-20 15:22:04.772752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.764 [2024-11-20 15:22:04.988176] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:18.764 [2024-11-20 15:22:04.988252] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.023 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:19.023 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:14:19.023 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:19.023 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:19.023 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.023 15:22:05 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.023 BaseBdev1_malloc 00:14:19.023 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.023 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:19.023 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.023 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.023 [2024-11-20 15:22:05.351989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:19.023 [2024-11-20 15:22:05.352070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.023 [2024-11-20 15:22:05.352095] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:19.023 [2024-11-20 15:22:05.352110] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.024 [2024-11-20 15:22:05.354676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.024 [2024-11-20 15:22:05.354751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:19.024 BaseBdev1 00:14:19.024 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.024 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:19.024 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:19.024 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.024 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.024 BaseBdev2_malloc 00:14:19.024 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.024 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:19.024 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.024 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.024 [2024-11-20 15:22:05.408581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:19.024 [2024-11-20 15:22:05.408687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.024 [2024-11-20 15:22:05.408715] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:19.024 [2024-11-20 15:22:05.408731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.024 [2024-11-20 15:22:05.411363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.024 [2024-11-20 15:22:05.411418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:19.024 BaseBdev2 00:14:19.024 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.024 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:19.024 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:19.024 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.024 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.024 BaseBdev3_malloc 00:14:19.024 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.024 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:14:19.024 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.024 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.024 [2024-11-20 15:22:05.486147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:19.024 [2024-11-20 15:22:05.486217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.024 [2024-11-20 15:22:05.486241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:19.024 [2024-11-20 15:22:05.486256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.024 [2024-11-20 15:22:05.488806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.024 [2024-11-20 15:22:05.488852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:19.024 BaseBdev3 00:14:19.024 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.024 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:19.024 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:19.024 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.024 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.284 BaseBdev4_malloc 00:14:19.284 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.284 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:19.284 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.284 15:22:05 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.284 [2024-11-20 15:22:05.544002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:19.284 [2024-11-20 15:22:05.544083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.284 [2024-11-20 15:22:05.544107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:19.284 [2024-11-20 15:22:05.544122] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.284 [2024-11-20 15:22:05.546560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.284 [2024-11-20 15:22:05.546609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:19.284 BaseBdev4 00:14:19.284 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.284 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:19.284 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.284 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.284 spare_malloc 00:14:19.284 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.284 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:19.284 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.284 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.284 spare_delay 00:14:19.284 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.284 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:19.284 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.284 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.284 [2024-11-20 15:22:05.601333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:19.284 [2024-11-20 15:22:05.601395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.284 [2024-11-20 15:22:05.601417] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:19.284 [2024-11-20 15:22:05.601431] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.284 [2024-11-20 15:22:05.603929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.284 [2024-11-20 15:22:05.603973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:19.284 spare 00:14:19.284 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.284 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:19.284 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.284 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.284 [2024-11-20 15:22:05.609372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:19.284 [2024-11-20 15:22:05.611580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:19.284 [2024-11-20 15:22:05.611667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:19.284 [2024-11-20 15:22:05.611724] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:19.284 [2024-11-20 15:22:05.611951] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:19.284 [2024-11-20 15:22:05.611966] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:19.284 [2024-11-20 15:22:05.612249] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:19.284 [2024-11-20 15:22:05.612458] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:19.284 [2024-11-20 15:22:05.612476] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:19.284 [2024-11-20 15:22:05.612652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.284 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.284 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:19.285 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.285 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.285 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:19.285 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:19.285 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:19.285 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.285 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.285 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:19.285 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.285 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.285 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.285 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.285 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.285 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.285 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.285 "name": "raid_bdev1", 00:14:19.285 "uuid": "447096cd-a629-4417-ac05-63744e3cd115", 00:14:19.285 "strip_size_kb": 0, 00:14:19.285 "state": "online", 00:14:19.285 "raid_level": "raid1", 00:14:19.285 "superblock": true, 00:14:19.285 "num_base_bdevs": 4, 00:14:19.285 "num_base_bdevs_discovered": 4, 00:14:19.285 "num_base_bdevs_operational": 4, 00:14:19.285 "base_bdevs_list": [ 00:14:19.285 { 00:14:19.285 "name": "BaseBdev1", 00:14:19.285 "uuid": "1071cb3b-f759-518d-a73c-25faeec8d277", 00:14:19.285 "is_configured": true, 00:14:19.285 "data_offset": 2048, 00:14:19.285 "data_size": 63488 00:14:19.285 }, 00:14:19.285 { 00:14:19.285 "name": "BaseBdev2", 00:14:19.285 "uuid": "fb33f203-b41a-5f64-ae7e-d8bbe5869bbb", 00:14:19.285 "is_configured": true, 00:14:19.285 "data_offset": 2048, 00:14:19.285 "data_size": 63488 00:14:19.285 }, 00:14:19.285 { 00:14:19.285 "name": "BaseBdev3", 00:14:19.285 "uuid": "77e4c29c-56d8-5f58-9526-3ceb892b92f6", 00:14:19.285 "is_configured": true, 00:14:19.285 "data_offset": 2048, 00:14:19.285 "data_size": 63488 00:14:19.285 }, 00:14:19.285 { 00:14:19.285 "name": "BaseBdev4", 00:14:19.285 "uuid": "e0965463-58d6-571b-9944-f188fa7fdbef", 00:14:19.285 
"is_configured": true, 00:14:19.285 "data_offset": 2048, 00:14:19.285 "data_size": 63488 00:14:19.285 } 00:14:19.285 ] 00:14:19.285 }' 00:14:19.285 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.285 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.891 [2024-11-20 15:22:06.077082] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd 
bdev_raid_remove_base_bdev BaseBdev1 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.891 [2024-11-20 15:22:06.172556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.891 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.891 "name": "raid_bdev1", 00:14:19.891 "uuid": "447096cd-a629-4417-ac05-63744e3cd115", 00:14:19.891 "strip_size_kb": 0, 00:14:19.891 "state": "online", 00:14:19.891 "raid_level": "raid1", 00:14:19.891 "superblock": true, 00:14:19.891 "num_base_bdevs": 4, 00:14:19.891 "num_base_bdevs_discovered": 3, 00:14:19.891 "num_base_bdevs_operational": 3, 00:14:19.891 "base_bdevs_list": [ 00:14:19.891 { 00:14:19.891 "name": null, 00:14:19.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.891 "is_configured": false, 00:14:19.891 "data_offset": 0, 00:14:19.891 "data_size": 63488 00:14:19.891 }, 00:14:19.891 { 00:14:19.891 "name": "BaseBdev2", 00:14:19.891 "uuid": "fb33f203-b41a-5f64-ae7e-d8bbe5869bbb", 00:14:19.891 "is_configured": true, 00:14:19.891 "data_offset": 2048, 00:14:19.891 "data_size": 63488 00:14:19.892 }, 00:14:19.892 { 00:14:19.892 "name": "BaseBdev3", 00:14:19.892 "uuid": "77e4c29c-56d8-5f58-9526-3ceb892b92f6", 00:14:19.892 "is_configured": true, 00:14:19.892 "data_offset": 2048, 00:14:19.892 "data_size": 63488 00:14:19.892 }, 00:14:19.892 { 00:14:19.892 "name": "BaseBdev4", 00:14:19.892 "uuid": "e0965463-58d6-571b-9944-f188fa7fdbef", 00:14:19.892 "is_configured": true, 00:14:19.892 "data_offset": 2048, 00:14:19.892 "data_size": 63488 00:14:19.892 } 00:14:19.892 ] 00:14:19.892 }' 00:14:19.892 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.892 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.892 [2024-11-20 15:22:06.289074] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:19.892 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:19.892 Zero copy mechanism will not be used. 00:14:19.892 Running I/O for 60 seconds... 00:14:20.151 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:20.151 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.151 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.151 [2024-11-20 15:22:06.594459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:20.409 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.409 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:20.409 [2024-11-20 15:22:06.648772] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:20.409 [2024-11-20 15:22:06.651120] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:20.409 [2024-11-20 15:22:06.760171] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:20.409 [2024-11-20 15:22:06.760750] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:20.409 [2024-11-20 15:22:06.887719] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:20.409 [2024-11-20 15:22:06.888459] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:20.976 [2024-11-20 15:22:07.257195] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:21.235 125.00 
IOPS, 375.00 MiB/s [2024-11-20T15:22:07.717Z] [2024-11-20 15:22:07.484718] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:21.235 [2024-11-20 15:22:07.485046] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:21.235 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:21.235 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.235 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:21.235 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:21.235 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.235 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.235 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.235 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.235 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.235 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.235 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.235 "name": "raid_bdev1", 00:14:21.235 "uuid": "447096cd-a629-4417-ac05-63744e3cd115", 00:14:21.235 "strip_size_kb": 0, 00:14:21.235 "state": "online", 00:14:21.235 "raid_level": "raid1", 00:14:21.235 "superblock": true, 00:14:21.235 "num_base_bdevs": 4, 00:14:21.235 "num_base_bdevs_discovered": 4, 00:14:21.235 "num_base_bdevs_operational": 4, 00:14:21.235 "process": { 00:14:21.235 
"type": "rebuild", 00:14:21.235 "target": "spare", 00:14:21.235 "progress": { 00:14:21.235 "blocks": 10240, 00:14:21.235 "percent": 16 00:14:21.235 } 00:14:21.235 }, 00:14:21.235 "base_bdevs_list": [ 00:14:21.235 { 00:14:21.235 "name": "spare", 00:14:21.235 "uuid": "1e27b911-2299-5b10-a6f5-922f164caf6d", 00:14:21.235 "is_configured": true, 00:14:21.235 "data_offset": 2048, 00:14:21.235 "data_size": 63488 00:14:21.235 }, 00:14:21.235 { 00:14:21.235 "name": "BaseBdev2", 00:14:21.235 "uuid": "fb33f203-b41a-5f64-ae7e-d8bbe5869bbb", 00:14:21.235 "is_configured": true, 00:14:21.235 "data_offset": 2048, 00:14:21.235 "data_size": 63488 00:14:21.235 }, 00:14:21.235 { 00:14:21.235 "name": "BaseBdev3", 00:14:21.235 "uuid": "77e4c29c-56d8-5f58-9526-3ceb892b92f6", 00:14:21.235 "is_configured": true, 00:14:21.235 "data_offset": 2048, 00:14:21.235 "data_size": 63488 00:14:21.235 }, 00:14:21.235 { 00:14:21.235 "name": "BaseBdev4", 00:14:21.235 "uuid": "e0965463-58d6-571b-9944-f188fa7fdbef", 00:14:21.235 "is_configured": true, 00:14:21.235 "data_offset": 2048, 00:14:21.235 "data_size": 63488 00:14:21.235 } 00:14:21.235 ] 00:14:21.235 }' 00:14:21.235 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.494 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:21.494 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.494 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:21.494 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:21.494 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.494 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.494 [2024-11-20 15:22:07.795155] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:21.494 [2024-11-20 15:22:07.839135] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:21.494 [2024-11-20 15:22:07.954814] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:21.494 [2024-11-20 15:22:07.959080] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.494 [2024-11-20 15:22:07.959135] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:21.494 [2024-11-20 15:22:07.959155] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:21.754 [2024-11-20 15:22:07.982703] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:21.754 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.754 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:21.754 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:21.754 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.754 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:21.754 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:21.754 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:21.754 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.754 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.754 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:14:21.754 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.754 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.754 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.754 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.754 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.754 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.754 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.754 "name": "raid_bdev1", 00:14:21.754 "uuid": "447096cd-a629-4417-ac05-63744e3cd115", 00:14:21.754 "strip_size_kb": 0, 00:14:21.754 "state": "online", 00:14:21.754 "raid_level": "raid1", 00:14:21.754 "superblock": true, 00:14:21.754 "num_base_bdevs": 4, 00:14:21.754 "num_base_bdevs_discovered": 3, 00:14:21.754 "num_base_bdevs_operational": 3, 00:14:21.754 "base_bdevs_list": [ 00:14:21.754 { 00:14:21.754 "name": null, 00:14:21.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.754 "is_configured": false, 00:14:21.754 "data_offset": 0, 00:14:21.754 "data_size": 63488 00:14:21.754 }, 00:14:21.754 { 00:14:21.754 "name": "BaseBdev2", 00:14:21.754 "uuid": "fb33f203-b41a-5f64-ae7e-d8bbe5869bbb", 00:14:21.754 "is_configured": true, 00:14:21.754 "data_offset": 2048, 00:14:21.754 "data_size": 63488 00:14:21.754 }, 00:14:21.754 { 00:14:21.754 "name": "BaseBdev3", 00:14:21.754 "uuid": "77e4c29c-56d8-5f58-9526-3ceb892b92f6", 00:14:21.754 "is_configured": true, 00:14:21.754 "data_offset": 2048, 00:14:21.754 "data_size": 63488 00:14:21.754 }, 00:14:21.754 { 00:14:21.754 "name": "BaseBdev4", 00:14:21.754 "uuid": "e0965463-58d6-571b-9944-f188fa7fdbef", 00:14:21.754 "is_configured": 
true, 00:14:21.754 "data_offset": 2048, 00:14:21.754 "data_size": 63488 00:14:21.754 } 00:14:21.754 ] 00:14:21.754 }' 00:14:21.754 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.754 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.014 119.00 IOPS, 357.00 MiB/s [2024-11-20T15:22:08.496Z] 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:22.014 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.014 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:22.014 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:22.014 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.014 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.014 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.014 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.014 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.273 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.273 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.273 "name": "raid_bdev1", 00:14:22.274 "uuid": "447096cd-a629-4417-ac05-63744e3cd115", 00:14:22.274 "strip_size_kb": 0, 00:14:22.274 "state": "online", 00:14:22.274 "raid_level": "raid1", 00:14:22.274 "superblock": true, 00:14:22.274 "num_base_bdevs": 4, 00:14:22.274 "num_base_bdevs_discovered": 3, 00:14:22.274 "num_base_bdevs_operational": 3, 00:14:22.274 "base_bdevs_list": [ 
00:14:22.274 { 00:14:22.274 "name": null, 00:14:22.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.274 "is_configured": false, 00:14:22.274 "data_offset": 0, 00:14:22.274 "data_size": 63488 00:14:22.274 }, 00:14:22.274 { 00:14:22.274 "name": "BaseBdev2", 00:14:22.274 "uuid": "fb33f203-b41a-5f64-ae7e-d8bbe5869bbb", 00:14:22.274 "is_configured": true, 00:14:22.274 "data_offset": 2048, 00:14:22.274 "data_size": 63488 00:14:22.274 }, 00:14:22.274 { 00:14:22.274 "name": "BaseBdev3", 00:14:22.274 "uuid": "77e4c29c-56d8-5f58-9526-3ceb892b92f6", 00:14:22.274 "is_configured": true, 00:14:22.274 "data_offset": 2048, 00:14:22.274 "data_size": 63488 00:14:22.274 }, 00:14:22.274 { 00:14:22.274 "name": "BaseBdev4", 00:14:22.274 "uuid": "e0965463-58d6-571b-9944-f188fa7fdbef", 00:14:22.274 "is_configured": true, 00:14:22.274 "data_offset": 2048, 00:14:22.274 "data_size": 63488 00:14:22.274 } 00:14:22.274 ] 00:14:22.274 }' 00:14:22.274 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.274 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:22.274 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.274 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:22.274 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:22.274 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.274 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.274 [2024-11-20 15:22:08.620151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:22.274 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.274 15:22:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:22.274 [2024-11-20 15:22:08.670414] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:22.274 [2024-11-20 15:22:08.672881] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:22.533 [2024-11-20 15:22:08.775457] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:22.533 [2024-11-20 15:22:08.776274] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:22.533 [2024-11-20 15:22:08.902554] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:22.533 [2024-11-20 15:22:08.903528] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:22.792 [2024-11-20 15:22:09.262900] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:23.050 128.33 IOPS, 385.00 MiB/s [2024-11-20T15:22:09.532Z] [2024-11-20 15:22:09.380052] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:23.050 [2024-11-20 15:22:09.380613] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:23.309 [2024-11-20 15:22:09.625258] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:23.309 15:22:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.309 15:22:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.309 15:22:09 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.309 15:22:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.309 15:22:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.309 15:22:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.309 15:22:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.309 15:22:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.309 15:22:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.309 15:22:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.309 15:22:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.309 "name": "raid_bdev1", 00:14:23.309 "uuid": "447096cd-a629-4417-ac05-63744e3cd115", 00:14:23.309 "strip_size_kb": 0, 00:14:23.309 "state": "online", 00:14:23.309 "raid_level": "raid1", 00:14:23.309 "superblock": true, 00:14:23.309 "num_base_bdevs": 4, 00:14:23.309 "num_base_bdevs_discovered": 4, 00:14:23.309 "num_base_bdevs_operational": 4, 00:14:23.309 "process": { 00:14:23.309 "type": "rebuild", 00:14:23.309 "target": "spare", 00:14:23.309 "progress": { 00:14:23.309 "blocks": 14336, 00:14:23.309 "percent": 22 00:14:23.309 } 00:14:23.309 }, 00:14:23.309 "base_bdevs_list": [ 00:14:23.309 { 00:14:23.309 "name": "spare", 00:14:23.309 "uuid": "1e27b911-2299-5b10-a6f5-922f164caf6d", 00:14:23.309 "is_configured": true, 00:14:23.309 "data_offset": 2048, 00:14:23.309 "data_size": 63488 00:14:23.309 }, 00:14:23.309 { 00:14:23.309 "name": "BaseBdev2", 00:14:23.309 "uuid": "fb33f203-b41a-5f64-ae7e-d8bbe5869bbb", 00:14:23.309 "is_configured": true, 00:14:23.309 "data_offset": 2048, 00:14:23.309 "data_size": 63488 00:14:23.309 }, 00:14:23.309 { 
00:14:23.309 "name": "BaseBdev3", 00:14:23.309 "uuid": "77e4c29c-56d8-5f58-9526-3ceb892b92f6", 00:14:23.309 "is_configured": true, 00:14:23.309 "data_offset": 2048, 00:14:23.309 "data_size": 63488 00:14:23.309 }, 00:14:23.309 { 00:14:23.309 "name": "BaseBdev4", 00:14:23.309 "uuid": "e0965463-58d6-571b-9944-f188fa7fdbef", 00:14:23.309 "is_configured": true, 00:14:23.309 "data_offset": 2048, 00:14:23.309 "data_size": 63488 00:14:23.309 } 00:14:23.309 ] 00:14:23.309 }' 00:14:23.309 15:22:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.309 15:22:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:23.309 [2024-11-20 15:22:09.759185] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:23.309 15:22:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.309 [2024-11-20 15:22:09.759921] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:23.309 15:22:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:23.309 15:22:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:23.309 15:22:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:23.309 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:23.309 15:22:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:23.309 15:22:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:23.309 15:22:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:23.309 15:22:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # 
rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:23.309 15:22:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.309 15:22:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.568 [2024-11-20 15:22:09.791979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:23.827 [2024-11-20 15:22:10.181643] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:23.827 [2024-11-20 15:22:10.181907] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:23.827 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.827 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:23.827 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:23.827 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.827 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.827 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.827 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.827 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.827 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.827 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.827 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.827 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:23.827 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.827 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.827 "name": "raid_bdev1", 00:14:23.827 "uuid": "447096cd-a629-4417-ac05-63744e3cd115", 00:14:23.827 "strip_size_kb": 0, 00:14:23.827 "state": "online", 00:14:23.827 "raid_level": "raid1", 00:14:23.827 "superblock": true, 00:14:23.827 "num_base_bdevs": 4, 00:14:23.827 "num_base_bdevs_discovered": 3, 00:14:23.827 "num_base_bdevs_operational": 3, 00:14:23.827 "process": { 00:14:23.827 "type": "rebuild", 00:14:23.827 "target": "spare", 00:14:23.827 "progress": { 00:14:23.827 "blocks": 18432, 00:14:23.827 "percent": 29 00:14:23.827 } 00:14:23.827 }, 00:14:23.827 "base_bdevs_list": [ 00:14:23.827 { 00:14:23.827 "name": "spare", 00:14:23.827 "uuid": "1e27b911-2299-5b10-a6f5-922f164caf6d", 00:14:23.827 "is_configured": true, 00:14:23.827 "data_offset": 2048, 00:14:23.827 "data_size": 63488 00:14:23.827 }, 00:14:23.827 { 00:14:23.827 "name": null, 00:14:23.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.827 "is_configured": false, 00:14:23.827 "data_offset": 0, 00:14:23.827 "data_size": 63488 00:14:23.827 }, 00:14:23.827 { 00:14:23.827 "name": "BaseBdev3", 00:14:23.827 "uuid": "77e4c29c-56d8-5f58-9526-3ceb892b92f6", 00:14:23.828 "is_configured": true, 00:14:23.828 "data_offset": 2048, 00:14:23.828 "data_size": 63488 00:14:23.828 }, 00:14:23.828 { 00:14:23.828 "name": "BaseBdev4", 00:14:23.828 "uuid": "e0965463-58d6-571b-9944-f188fa7fdbef", 00:14:23.828 "is_configured": true, 00:14:23.828 "data_offset": 2048, 00:14:23.828 "data_size": 63488 00:14:23.828 } 00:14:23.828 ] 00:14:23.828 }' 00:14:23.828 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.828 108.00 IOPS, 324.00 MiB/s [2024-11-20T15:22:10.310Z] 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:14:23.828 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.086 [2024-11-20 15:22:10.323466] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:24.086 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:24.086 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=494 00:14:24.086 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:24.086 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.086 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.086 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.086 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.086 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.086 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.086 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.086 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.086 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.086 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.086 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.086 "name": "raid_bdev1", 00:14:24.086 "uuid": "447096cd-a629-4417-ac05-63744e3cd115", 00:14:24.086 
"strip_size_kb": 0, 00:14:24.086 "state": "online", 00:14:24.086 "raid_level": "raid1", 00:14:24.086 "superblock": true, 00:14:24.086 "num_base_bdevs": 4, 00:14:24.086 "num_base_bdevs_discovered": 3, 00:14:24.086 "num_base_bdevs_operational": 3, 00:14:24.086 "process": { 00:14:24.086 "type": "rebuild", 00:14:24.086 "target": "spare", 00:14:24.086 "progress": { 00:14:24.086 "blocks": 20480, 00:14:24.086 "percent": 32 00:14:24.086 } 00:14:24.086 }, 00:14:24.086 "base_bdevs_list": [ 00:14:24.086 { 00:14:24.086 "name": "spare", 00:14:24.086 "uuid": "1e27b911-2299-5b10-a6f5-922f164caf6d", 00:14:24.086 "is_configured": true, 00:14:24.086 "data_offset": 2048, 00:14:24.086 "data_size": 63488 00:14:24.086 }, 00:14:24.086 { 00:14:24.086 "name": null, 00:14:24.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.086 "is_configured": false, 00:14:24.086 "data_offset": 0, 00:14:24.086 "data_size": 63488 00:14:24.086 }, 00:14:24.086 { 00:14:24.086 "name": "BaseBdev3", 00:14:24.086 "uuid": "77e4c29c-56d8-5f58-9526-3ceb892b92f6", 00:14:24.086 "is_configured": true, 00:14:24.086 "data_offset": 2048, 00:14:24.086 "data_size": 63488 00:14:24.086 }, 00:14:24.086 { 00:14:24.086 "name": "BaseBdev4", 00:14:24.086 "uuid": "e0965463-58d6-571b-9944-f188fa7fdbef", 00:14:24.086 "is_configured": true, 00:14:24.086 "data_offset": 2048, 00:14:24.086 "data_size": 63488 00:14:24.086 } 00:14:24.086 ] 00:14:24.086 }' 00:14:24.086 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.086 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:24.086 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.086 [2024-11-20 15:22:10.447402] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:24.086 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:24.086 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:24.346 [2024-11-20 15:22:10.724690] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:25.171 96.60 IOPS, 289.80 MiB/s [2024-11-20T15:22:11.653Z] 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:25.171 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:25.171 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.171 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:25.172 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:25.172 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.172 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.172 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.172 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.172 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.172 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.172 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.172 "name": "raid_bdev1", 00:14:25.172 "uuid": "447096cd-a629-4417-ac05-63744e3cd115", 00:14:25.172 "strip_size_kb": 0, 00:14:25.172 "state": "online", 00:14:25.172 "raid_level": "raid1", 00:14:25.172 "superblock": true, 00:14:25.172 "num_base_bdevs": 4, 00:14:25.172 
"num_base_bdevs_discovered": 3, 00:14:25.172 "num_base_bdevs_operational": 3, 00:14:25.172 "process": { 00:14:25.172 "type": "rebuild", 00:14:25.172 "target": "spare", 00:14:25.172 "progress": { 00:14:25.172 "blocks": 36864, 00:14:25.172 "percent": 58 00:14:25.172 } 00:14:25.172 }, 00:14:25.172 "base_bdevs_list": [ 00:14:25.172 { 00:14:25.172 "name": "spare", 00:14:25.172 "uuid": "1e27b911-2299-5b10-a6f5-922f164caf6d", 00:14:25.172 "is_configured": true, 00:14:25.172 "data_offset": 2048, 00:14:25.172 "data_size": 63488 00:14:25.172 }, 00:14:25.172 { 00:14:25.172 "name": null, 00:14:25.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.172 "is_configured": false, 00:14:25.172 "data_offset": 0, 00:14:25.172 "data_size": 63488 00:14:25.172 }, 00:14:25.172 { 00:14:25.172 "name": "BaseBdev3", 00:14:25.172 "uuid": "77e4c29c-56d8-5f58-9526-3ceb892b92f6", 00:14:25.172 "is_configured": true, 00:14:25.172 "data_offset": 2048, 00:14:25.172 "data_size": 63488 00:14:25.172 }, 00:14:25.172 { 00:14:25.172 "name": "BaseBdev4", 00:14:25.172 "uuid": "e0965463-58d6-571b-9944-f188fa7fdbef", 00:14:25.172 "is_configured": true, 00:14:25.172 "data_offset": 2048, 00:14:25.172 "data_size": 63488 00:14:25.172 } 00:14:25.172 ] 00:14:25.172 }' 00:14:25.172 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.172 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:25.172 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.172 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:25.172 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:25.172 [2024-11-20 15:22:11.644038] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:25.835 [2024-11-20 
15:22:11.974545] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:25.835 [2024-11-20 15:22:12.185014] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:26.094 88.17 IOPS, 264.50 MiB/s [2024-11-20T15:22:12.576Z] [2024-11-20 15:22:12.408541] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:26.353 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:26.353 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.353 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.353 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.353 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.353 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.353 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.353 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.353 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.353 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.353 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.353 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.353 "name": "raid_bdev1", 00:14:26.353 "uuid": "447096cd-a629-4417-ac05-63744e3cd115", 00:14:26.353 
"strip_size_kb": 0, 00:14:26.353 "state": "online", 00:14:26.353 "raid_level": "raid1", 00:14:26.353 "superblock": true, 00:14:26.353 "num_base_bdevs": 4, 00:14:26.353 "num_base_bdevs_discovered": 3, 00:14:26.353 "num_base_bdevs_operational": 3, 00:14:26.353 "process": { 00:14:26.353 "type": "rebuild", 00:14:26.353 "target": "spare", 00:14:26.353 "progress": { 00:14:26.353 "blocks": 53248, 00:14:26.353 "percent": 83 00:14:26.353 } 00:14:26.353 }, 00:14:26.353 "base_bdevs_list": [ 00:14:26.353 { 00:14:26.353 "name": "spare", 00:14:26.353 "uuid": "1e27b911-2299-5b10-a6f5-922f164caf6d", 00:14:26.353 "is_configured": true, 00:14:26.353 "data_offset": 2048, 00:14:26.353 "data_size": 63488 00:14:26.353 }, 00:14:26.353 { 00:14:26.353 "name": null, 00:14:26.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.353 "is_configured": false, 00:14:26.353 "data_offset": 0, 00:14:26.353 "data_size": 63488 00:14:26.353 }, 00:14:26.353 { 00:14:26.353 "name": "BaseBdev3", 00:14:26.353 "uuid": "77e4c29c-56d8-5f58-9526-3ceb892b92f6", 00:14:26.353 "is_configured": true, 00:14:26.353 "data_offset": 2048, 00:14:26.353 "data_size": 63488 00:14:26.353 }, 00:14:26.353 { 00:14:26.353 "name": "BaseBdev4", 00:14:26.353 "uuid": "e0965463-58d6-571b-9944-f188fa7fdbef", 00:14:26.353 "is_configured": true, 00:14:26.353 "data_offset": 2048, 00:14:26.353 "data_size": 63488 00:14:26.353 } 00:14:26.353 ] 00:14:26.353 }' 00:14:26.353 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.353 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.353 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.353 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.353 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:26.612 
[2024-11-20 15:22:13.075446] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:26.872 [2024-11-20 15:22:13.178531] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:26.872 [2024-11-20 15:22:13.181738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.440 80.71 IOPS, 242.14 MiB/s [2024-11-20T15:22:13.922Z] 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:27.440 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.440 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.440 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:27.440 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.440 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.440 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.440 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.440 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.440 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.440 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.440 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.440 "name": "raid_bdev1", 00:14:27.440 "uuid": "447096cd-a629-4417-ac05-63744e3cd115", 00:14:27.440 "strip_size_kb": 0, 00:14:27.440 "state": "online", 00:14:27.440 "raid_level": "raid1", 00:14:27.440 
"superblock": true, 00:14:27.440 "num_base_bdevs": 4, 00:14:27.440 "num_base_bdevs_discovered": 3, 00:14:27.440 "num_base_bdevs_operational": 3, 00:14:27.440 "base_bdevs_list": [ 00:14:27.440 { 00:14:27.440 "name": "spare", 00:14:27.440 "uuid": "1e27b911-2299-5b10-a6f5-922f164caf6d", 00:14:27.440 "is_configured": true, 00:14:27.440 "data_offset": 2048, 00:14:27.440 "data_size": 63488 00:14:27.440 }, 00:14:27.440 { 00:14:27.440 "name": null, 00:14:27.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.440 "is_configured": false, 00:14:27.440 "data_offset": 0, 00:14:27.440 "data_size": 63488 00:14:27.440 }, 00:14:27.440 { 00:14:27.440 "name": "BaseBdev3", 00:14:27.440 "uuid": "77e4c29c-56d8-5f58-9526-3ceb892b92f6", 00:14:27.440 "is_configured": true, 00:14:27.440 "data_offset": 2048, 00:14:27.440 "data_size": 63488 00:14:27.440 }, 00:14:27.440 { 00:14:27.440 "name": "BaseBdev4", 00:14:27.440 "uuid": "e0965463-58d6-571b-9944-f188fa7fdbef", 00:14:27.440 "is_configured": true, 00:14:27.440 "data_offset": 2048, 00:14:27.440 "data_size": 63488 00:14:27.440 } 00:14:27.440 ] 00:14:27.440 }' 00:14:27.440 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.440 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:27.440 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.440 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:27.440 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:27.440 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:27.440 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.440 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # 
local process_type=none 00:14:27.440 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:27.440 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.440 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.440 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.440 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.440 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.440 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.700 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.700 "name": "raid_bdev1", 00:14:27.700 "uuid": "447096cd-a629-4417-ac05-63744e3cd115", 00:14:27.700 "strip_size_kb": 0, 00:14:27.700 "state": "online", 00:14:27.700 "raid_level": "raid1", 00:14:27.700 "superblock": true, 00:14:27.700 "num_base_bdevs": 4, 00:14:27.700 "num_base_bdevs_discovered": 3, 00:14:27.700 "num_base_bdevs_operational": 3, 00:14:27.700 "base_bdevs_list": [ 00:14:27.700 { 00:14:27.700 "name": "spare", 00:14:27.700 "uuid": "1e27b911-2299-5b10-a6f5-922f164caf6d", 00:14:27.700 "is_configured": true, 00:14:27.700 "data_offset": 2048, 00:14:27.700 "data_size": 63488 00:14:27.700 }, 00:14:27.700 { 00:14:27.700 "name": null, 00:14:27.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.700 "is_configured": false, 00:14:27.700 "data_offset": 0, 00:14:27.700 "data_size": 63488 00:14:27.700 }, 00:14:27.700 { 00:14:27.700 "name": "BaseBdev3", 00:14:27.700 "uuid": "77e4c29c-56d8-5f58-9526-3ceb892b92f6", 00:14:27.700 "is_configured": true, 00:14:27.700 "data_offset": 2048, 00:14:27.700 "data_size": 63488 00:14:27.700 }, 00:14:27.700 { 00:14:27.700 "name": 
"BaseBdev4", 00:14:27.700 "uuid": "e0965463-58d6-571b-9944-f188fa7fdbef", 00:14:27.700 "is_configured": true, 00:14:27.700 "data_offset": 2048, 00:14:27.700 "data_size": 63488 00:14:27.700 } 00:14:27.700 ] 00:14:27.700 }' 00:14:27.700 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.700 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:27.700 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.700 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:27.700 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:27.700 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.700 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.700 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.700 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.700 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.700 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.700 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.700 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.700 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.700 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.700 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.700 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.700 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.700 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.700 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.700 "name": "raid_bdev1", 00:14:27.700 "uuid": "447096cd-a629-4417-ac05-63744e3cd115", 00:14:27.700 "strip_size_kb": 0, 00:14:27.700 "state": "online", 00:14:27.700 "raid_level": "raid1", 00:14:27.700 "superblock": true, 00:14:27.700 "num_base_bdevs": 4, 00:14:27.700 "num_base_bdevs_discovered": 3, 00:14:27.700 "num_base_bdevs_operational": 3, 00:14:27.700 "base_bdevs_list": [ 00:14:27.700 { 00:14:27.700 "name": "spare", 00:14:27.700 "uuid": "1e27b911-2299-5b10-a6f5-922f164caf6d", 00:14:27.700 "is_configured": true, 00:14:27.700 "data_offset": 2048, 00:14:27.700 "data_size": 63488 00:14:27.700 }, 00:14:27.700 { 00:14:27.700 "name": null, 00:14:27.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.700 "is_configured": false, 00:14:27.700 "data_offset": 0, 00:14:27.700 "data_size": 63488 00:14:27.700 }, 00:14:27.700 { 00:14:27.700 "name": "BaseBdev3", 00:14:27.700 "uuid": "77e4c29c-56d8-5f58-9526-3ceb892b92f6", 00:14:27.700 "is_configured": true, 00:14:27.700 "data_offset": 2048, 00:14:27.700 "data_size": 63488 00:14:27.700 }, 00:14:27.700 { 00:14:27.700 "name": "BaseBdev4", 00:14:27.700 "uuid": "e0965463-58d6-571b-9944-f188fa7fdbef", 00:14:27.700 "is_configured": true, 00:14:27.700 "data_offset": 2048, 00:14:27.700 "data_size": 63488 00:14:27.700 } 00:14:27.700 ] 00:14:27.700 }' 00:14:27.700 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.700 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:27.959 74.88 IOPS, 224.62 MiB/s [2024-11-20T15:22:14.441Z] 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:27.959 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.959 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.959 [2024-11-20 15:22:14.398255] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:27.959 [2024-11-20 15:22:14.398489] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:28.218 00:14:28.218 Latency(us) 00:14:28.218 [2024-11-20T15:22:14.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:28.218 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:28.218 raid_bdev1 : 8.22 73.60 220.81 0.00 0.00 18455.77 399.73 116227.70 00:14:28.218 [2024-11-20T15:22:14.700Z] =================================================================================================================== 00:14:28.218 [2024-11-20T15:22:14.700Z] Total : 73.60 220.81 0.00 0.00 18455.77 399.73 116227.70 00:14:28.218 [2024-11-20 15:22:14.520574] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:28.218 [2024-11-20 15:22:14.520694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.218 [2024-11-20 15:22:14.520844] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:28.218 [2024-11-20 15:22:14.520865] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:28.218 { 00:14:28.218 "results": [ 00:14:28.218 { 00:14:28.218 "job": "raid_bdev1", 00:14:28.218 "core_mask": "0x1", 00:14:28.218 "workload": "randrw", 00:14:28.218 "percentage": 50, 00:14:28.218 "status": "finished", 00:14:28.218 
"queue_depth": 2, 00:14:28.218 "io_size": 3145728, 00:14:28.218 "runtime": 8.219838, 00:14:28.218 "iops": 73.60242379472686, 00:14:28.218 "mibps": 220.80727138418058, 00:14:28.218 "io_failed": 0, 00:14:28.218 "io_timeout": 0, 00:14:28.218 "avg_latency_us": 18455.766561120516, 00:14:28.218 "min_latency_us": 399.7301204819277, 00:14:28.218 "max_latency_us": 116227.70120481927 00:14:28.218 } 00:14:28.218 ], 00:14:28.218 "core_count": 1 00:14:28.218 } 00:14:28.218 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.218 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.218 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:28.218 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.218 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.218 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.218 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:28.218 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:28.218 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:28.218 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:28.218 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:28.218 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:28.218 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:28.218 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:28.218 15:22:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:28.218 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:28.218 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:28.218 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:28.218 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:28.478 /dev/nbd0 00:14:28.478 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:28.478 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:28.478 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:28.478 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:28.478 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:28.478 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:28.478 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:28.478 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:28.478 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:28.478 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:28.478 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:28.478 1+0 records in 00:14:28.478 1+0 records out 00:14:28.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430971 s, 9.5 MB/s 00:14:28.478 15:22:14 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.478 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:28.478 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.478 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:28.478 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:28.478 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:28.478 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:28.478 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:28.478 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:28.478 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:28.478 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:28.478 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:28.478 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:28.479 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:28.479 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:28.479 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:28.479 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:28.479 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- 
# local nbd_list 00:14:28.479 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:28.479 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:28.479 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:28.479 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:28.738 /dev/nbd1 00:14:28.738 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:28.738 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:28.738 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:28.738 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:28.738 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:28.738 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:28.738 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:28.738 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:28.738 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:28.738 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:28.738 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:28.738 1+0 records in 00:14:28.738 1+0 records out 00:14:28.738 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000484839 s, 8.4 MB/s 00:14:28.997 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.997 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:28.997 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.998 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:28.998 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:28.998 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:28.998 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:28.998 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:28.998 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:28.998 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:28.998 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:28.998 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:28.998 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:28.998 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:28.998 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:29.257 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:29.257 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:29.257 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:29.257 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:29.257 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:29.257 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:29.257 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:29.257 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:29.257 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:29.257 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:29.257 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:29.257 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:29.257 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:29.257 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:29.257 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:29.257 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:29.257 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:29.257 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:29.257 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:29.257 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:29.516 /dev/nbd1 00:14:29.517 15:22:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:29.517 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:29.517 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:29.517 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:29.517 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:29.517 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:29.517 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:29.517 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:29.517 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:29.517 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:29.517 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:29.517 1+0 records in 00:14:29.517 1+0 records out 00:14:29.517 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440089 s, 9.3 MB/s 00:14:29.517 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.517 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:29.517 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.517 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:29.517 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 
00:14:29.517 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:29.517 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:29.517 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:29.517 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:29.517 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:29.517 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:29.517 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:29.517 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:29.517 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:29.517 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:29.777 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:29.777 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:29.777 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:29.777 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:29.777 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:29.777 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:29.777 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:29.777 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:29.777 
15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:29.777 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:29.777 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:29.777 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:29.777 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:29.777 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:29.777 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:30.036 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:30.036 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:30.036 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:30.036 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:30.036 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:30.036 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:30.036 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:30.036 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:30.036 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:30.036 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:30.036 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:30.036 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.037 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.037 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:30.037 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.037 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.037 [2024-11-20 15:22:16.504045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:30.037 [2024-11-20 15:22:16.504117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.037 [2024-11-20 15:22:16.504143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:30.037 [2024-11-20 15:22:16.504154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.037 [2024-11-20 15:22:16.506786] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.037 [2024-11-20 15:22:16.506832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:30.037 [2024-11-20 15:22:16.506942] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:30.037 [2024-11-20 15:22:16.507007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:30.037 [2024-11-20 15:22:16.507161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:30.037 [2024-11-20 15:22:16.507272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:30.037 spare 00:14:30.037 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.037 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # 
rpc_cmd bdev_wait_for_examine 00:14:30.037 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.037 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.296 [2024-11-20 15:22:16.607224] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:30.296 [2024-11-20 15:22:16.607280] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:30.296 [2024-11-20 15:22:16.607675] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:14:30.296 [2024-11-20 15:22:16.607881] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:30.296 [2024-11-20 15:22:16.607897] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:30.296 [2024-11-20 15:22:16.608101] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.296 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.296 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:30.296 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.296 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.296 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:30.296 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:30.296 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.296 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.296 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.296 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.296 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.296 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.296 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.296 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.296 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.296 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.296 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.296 "name": "raid_bdev1", 00:14:30.296 "uuid": "447096cd-a629-4417-ac05-63744e3cd115", 00:14:30.296 "strip_size_kb": 0, 00:14:30.296 "state": "online", 00:14:30.296 "raid_level": "raid1", 00:14:30.296 "superblock": true, 00:14:30.296 "num_base_bdevs": 4, 00:14:30.296 "num_base_bdevs_discovered": 3, 00:14:30.296 "num_base_bdevs_operational": 3, 00:14:30.296 "base_bdevs_list": [ 00:14:30.296 { 00:14:30.296 "name": "spare", 00:14:30.296 "uuid": "1e27b911-2299-5b10-a6f5-922f164caf6d", 00:14:30.296 "is_configured": true, 00:14:30.296 "data_offset": 2048, 00:14:30.296 "data_size": 63488 00:14:30.296 }, 00:14:30.296 { 00:14:30.296 "name": null, 00:14:30.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.296 "is_configured": false, 00:14:30.296 "data_offset": 2048, 00:14:30.296 "data_size": 63488 00:14:30.296 }, 00:14:30.296 { 00:14:30.296 "name": "BaseBdev3", 00:14:30.296 "uuid": "77e4c29c-56d8-5f58-9526-3ceb892b92f6", 00:14:30.297 "is_configured": true, 00:14:30.297 "data_offset": 2048, 00:14:30.297 "data_size": 63488 00:14:30.297 }, 
00:14:30.297 { 00:14:30.297 "name": "BaseBdev4", 00:14:30.297 "uuid": "e0965463-58d6-571b-9944-f188fa7fdbef", 00:14:30.297 "is_configured": true, 00:14:30.297 "data_offset": 2048, 00:14:30.297 "data_size": 63488 00:14:30.297 } 00:14:30.297 ] 00:14:30.297 }' 00:14:30.297 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.297 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.864 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:30.864 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.864 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:30.864 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:30.864 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.864 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.864 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.864 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.864 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.864 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.864 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.864 "name": "raid_bdev1", 00:14:30.864 "uuid": "447096cd-a629-4417-ac05-63744e3cd115", 00:14:30.864 "strip_size_kb": 0, 00:14:30.864 "state": "online", 00:14:30.864 "raid_level": "raid1", 00:14:30.864 "superblock": true, 00:14:30.864 "num_base_bdevs": 4, 00:14:30.864 
"num_base_bdevs_discovered": 3, 00:14:30.864 "num_base_bdevs_operational": 3, 00:14:30.864 "base_bdevs_list": [ 00:14:30.864 { 00:14:30.864 "name": "spare", 00:14:30.864 "uuid": "1e27b911-2299-5b10-a6f5-922f164caf6d", 00:14:30.864 "is_configured": true, 00:14:30.864 "data_offset": 2048, 00:14:30.864 "data_size": 63488 00:14:30.864 }, 00:14:30.864 { 00:14:30.864 "name": null, 00:14:30.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.864 "is_configured": false, 00:14:30.864 "data_offset": 2048, 00:14:30.865 "data_size": 63488 00:14:30.865 }, 00:14:30.865 { 00:14:30.865 "name": "BaseBdev3", 00:14:30.865 "uuid": "77e4c29c-56d8-5f58-9526-3ceb892b92f6", 00:14:30.865 "is_configured": true, 00:14:30.865 "data_offset": 2048, 00:14:30.865 "data_size": 63488 00:14:30.865 }, 00:14:30.865 { 00:14:30.865 "name": "BaseBdev4", 00:14:30.865 "uuid": "e0965463-58d6-571b-9944-f188fa7fdbef", 00:14:30.865 "is_configured": true, 00:14:30.865 "data_offset": 2048, 00:14:30.865 "data_size": 63488 00:14:30.865 } 00:14:30.865 ] 00:14:30.865 }' 00:14:30.865 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.865 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:30.865 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.865 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:30.865 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.865 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.865 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.865 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:30.865 15:22:17 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.865 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:30.865 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:30.865 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.865 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.865 [2024-11-20 15:22:17.223351] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:30.865 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.865 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:30.865 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.865 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.865 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:30.865 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:30.865 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:30.865 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.865 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.865 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.865 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.865 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:30.865 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.865 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.865 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.865 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.865 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.865 "name": "raid_bdev1", 00:14:30.865 "uuid": "447096cd-a629-4417-ac05-63744e3cd115", 00:14:30.865 "strip_size_kb": 0, 00:14:30.865 "state": "online", 00:14:30.865 "raid_level": "raid1", 00:14:30.865 "superblock": true, 00:14:30.865 "num_base_bdevs": 4, 00:14:30.865 "num_base_bdevs_discovered": 2, 00:14:30.865 "num_base_bdevs_operational": 2, 00:14:30.865 "base_bdevs_list": [ 00:14:30.865 { 00:14:30.865 "name": null, 00:14:30.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.865 "is_configured": false, 00:14:30.865 "data_offset": 0, 00:14:30.865 "data_size": 63488 00:14:30.865 }, 00:14:30.865 { 00:14:30.865 "name": null, 00:14:30.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.865 "is_configured": false, 00:14:30.865 "data_offset": 2048, 00:14:30.865 "data_size": 63488 00:14:30.865 }, 00:14:30.865 { 00:14:30.865 "name": "BaseBdev3", 00:14:30.865 "uuid": "77e4c29c-56d8-5f58-9526-3ceb892b92f6", 00:14:30.865 "is_configured": true, 00:14:30.865 "data_offset": 2048, 00:14:30.865 "data_size": 63488 00:14:30.865 }, 00:14:30.865 { 00:14:30.865 "name": "BaseBdev4", 00:14:30.865 "uuid": "e0965463-58d6-571b-9944-f188fa7fdbef", 00:14:30.865 "is_configured": true, 00:14:30.865 "data_offset": 2048, 00:14:30.865 "data_size": 63488 00:14:30.865 } 00:14:30.865 ] 00:14:30.865 }' 00:14:30.865 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.865 15:22:17 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.432 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:31.432 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.432 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.432 [2024-11-20 15:22:17.646895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:31.432 [2024-11-20 15:22:17.647121] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:31.432 [2024-11-20 15:22:17.647138] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:31.432 [2024-11-20 15:22:17.647183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:31.432 [2024-11-20 15:22:17.662431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:14:31.432 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.432 15:22:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:31.432 [2024-11-20 15:22:17.664822] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:32.386 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:32.386 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.386 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:32.386 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:32.386 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.387 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.387 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.387 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.387 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.387 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.387 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.387 "name": "raid_bdev1", 00:14:32.387 "uuid": "447096cd-a629-4417-ac05-63744e3cd115", 00:14:32.387 "strip_size_kb": 0, 00:14:32.387 "state": "online", 00:14:32.387 "raid_level": "raid1", 00:14:32.387 "superblock": true, 00:14:32.387 "num_base_bdevs": 4, 00:14:32.387 "num_base_bdevs_discovered": 3, 00:14:32.387 "num_base_bdevs_operational": 3, 00:14:32.387 "process": { 00:14:32.387 "type": "rebuild", 00:14:32.387 "target": "spare", 00:14:32.387 "progress": { 00:14:32.387 "blocks": 20480, 00:14:32.387 "percent": 32 00:14:32.387 } 00:14:32.387 }, 00:14:32.387 "base_bdevs_list": [ 00:14:32.387 { 00:14:32.387 "name": "spare", 00:14:32.387 "uuid": "1e27b911-2299-5b10-a6f5-922f164caf6d", 00:14:32.387 "is_configured": true, 00:14:32.387 "data_offset": 2048, 00:14:32.387 "data_size": 63488 00:14:32.387 }, 00:14:32.387 { 00:14:32.387 "name": null, 00:14:32.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.387 "is_configured": false, 00:14:32.387 "data_offset": 2048, 00:14:32.387 "data_size": 63488 00:14:32.387 }, 00:14:32.387 { 00:14:32.387 "name": "BaseBdev3", 00:14:32.387 "uuid": "77e4c29c-56d8-5f58-9526-3ceb892b92f6", 00:14:32.387 "is_configured": true, 00:14:32.387 "data_offset": 2048, 00:14:32.387 "data_size": 63488 00:14:32.387 }, 00:14:32.387 { 
00:14:32.387 "name": "BaseBdev4", 00:14:32.387 "uuid": "e0965463-58d6-571b-9944-f188fa7fdbef", 00:14:32.387 "is_configured": true, 00:14:32.387 "data_offset": 2048, 00:14:32.387 "data_size": 63488 00:14:32.387 } 00:14:32.387 ] 00:14:32.387 }' 00:14:32.387 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.387 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:32.387 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.387 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:32.387 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:32.387 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.387 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.387 [2024-11-20 15:22:18.820338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:32.646 [2024-11-20 15:22:18.870507] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:32.646 [2024-11-20 15:22:18.870596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.646 [2024-11-20 15:22:18.870623] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:32.646 [2024-11-20 15:22:18.870633] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:32.646 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.646 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:32.646 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:14:32.646 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.646 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:32.646 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:32.646 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:32.646 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.646 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.646 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.646 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.646 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.646 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.646 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.646 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.646 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.646 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.646 "name": "raid_bdev1", 00:14:32.646 "uuid": "447096cd-a629-4417-ac05-63744e3cd115", 00:14:32.646 "strip_size_kb": 0, 00:14:32.646 "state": "online", 00:14:32.646 "raid_level": "raid1", 00:14:32.646 "superblock": true, 00:14:32.646 "num_base_bdevs": 4, 00:14:32.646 "num_base_bdevs_discovered": 2, 00:14:32.646 "num_base_bdevs_operational": 2, 00:14:32.646 "base_bdevs_list": [ 00:14:32.646 { 00:14:32.646 
"name": null, 00:14:32.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.646 "is_configured": false, 00:14:32.646 "data_offset": 0, 00:14:32.646 "data_size": 63488 00:14:32.646 }, 00:14:32.646 { 00:14:32.647 "name": null, 00:14:32.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.647 "is_configured": false, 00:14:32.647 "data_offset": 2048, 00:14:32.647 "data_size": 63488 00:14:32.647 }, 00:14:32.647 { 00:14:32.647 "name": "BaseBdev3", 00:14:32.647 "uuid": "77e4c29c-56d8-5f58-9526-3ceb892b92f6", 00:14:32.647 "is_configured": true, 00:14:32.647 "data_offset": 2048, 00:14:32.647 "data_size": 63488 00:14:32.647 }, 00:14:32.647 { 00:14:32.647 "name": "BaseBdev4", 00:14:32.647 "uuid": "e0965463-58d6-571b-9944-f188fa7fdbef", 00:14:32.647 "is_configured": true, 00:14:32.647 "data_offset": 2048, 00:14:32.647 "data_size": 63488 00:14:32.647 } 00:14:32.647 ] 00:14:32.647 }' 00:14:32.647 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.647 15:22:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.905 15:22:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:32.905 15:22:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.905 15:22:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.905 [2024-11-20 15:22:19.315956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:32.905 [2024-11-20 15:22:19.316048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.905 [2024-11-20 15:22:19.316083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:32.905 [2024-11-20 15:22:19.316097] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.905 [2024-11-20 15:22:19.316610] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.905 [2024-11-20 15:22:19.316633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:32.905 [2024-11-20 15:22:19.316756] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:32.905 [2024-11-20 15:22:19.316771] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:32.905 [2024-11-20 15:22:19.316786] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:32.905 [2024-11-20 15:22:19.316810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:32.905 [2024-11-20 15:22:19.332283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:14:32.905 spare 00:14:32.905 15:22:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.905 15:22:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:32.905 [2024-11-20 15:22:19.334499] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:33.876 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:33.876 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.876 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:33.876 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:33.876 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.876 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.876 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.876 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.876 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.135 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.135 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.135 "name": "raid_bdev1", 00:14:34.135 "uuid": "447096cd-a629-4417-ac05-63744e3cd115", 00:14:34.135 "strip_size_kb": 0, 00:14:34.135 "state": "online", 00:14:34.135 "raid_level": "raid1", 00:14:34.135 "superblock": true, 00:14:34.135 "num_base_bdevs": 4, 00:14:34.135 "num_base_bdevs_discovered": 3, 00:14:34.135 "num_base_bdevs_operational": 3, 00:14:34.135 "process": { 00:14:34.135 "type": "rebuild", 00:14:34.135 "target": "spare", 00:14:34.135 "progress": { 00:14:34.135 "blocks": 20480, 00:14:34.135 "percent": 32 00:14:34.135 } 00:14:34.135 }, 00:14:34.135 "base_bdevs_list": [ 00:14:34.135 { 00:14:34.135 "name": "spare", 00:14:34.135 "uuid": "1e27b911-2299-5b10-a6f5-922f164caf6d", 00:14:34.135 "is_configured": true, 00:14:34.135 "data_offset": 2048, 00:14:34.135 "data_size": 63488 00:14:34.135 }, 00:14:34.135 { 00:14:34.135 "name": null, 00:14:34.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.135 "is_configured": false, 00:14:34.135 "data_offset": 2048, 00:14:34.135 "data_size": 63488 00:14:34.135 }, 00:14:34.135 { 00:14:34.135 "name": "BaseBdev3", 00:14:34.135 "uuid": "77e4c29c-56d8-5f58-9526-3ceb892b92f6", 00:14:34.135 "is_configured": true, 00:14:34.135 "data_offset": 2048, 00:14:34.135 "data_size": 63488 00:14:34.136 }, 00:14:34.136 { 00:14:34.136 "name": "BaseBdev4", 00:14:34.136 "uuid": "e0965463-58d6-571b-9944-f188fa7fdbef", 00:14:34.136 "is_configured": true, 00:14:34.136 "data_offset": 2048, 00:14:34.136 "data_size": 63488 00:14:34.136 } 00:14:34.136 
] 00:14:34.136 }' 00:14:34.136 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.136 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:34.136 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.136 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:34.136 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:34.136 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.136 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.136 [2024-11-20 15:22:20.462929] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:34.136 [2024-11-20 15:22:20.540269] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:34.136 [2024-11-20 15:22:20.540387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.136 [2024-11-20 15:22:20.540406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:34.136 [2024-11-20 15:22:20.540419] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:34.136 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.136 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:34.136 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.136 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.136 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:34.136 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:34.136 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:34.136 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.136 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.136 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.136 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.136 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.136 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.136 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.136 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.136 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.394 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.394 "name": "raid_bdev1", 00:14:34.394 "uuid": "447096cd-a629-4417-ac05-63744e3cd115", 00:14:34.394 "strip_size_kb": 0, 00:14:34.394 "state": "online", 00:14:34.394 "raid_level": "raid1", 00:14:34.394 "superblock": true, 00:14:34.394 "num_base_bdevs": 4, 00:14:34.394 "num_base_bdevs_discovered": 2, 00:14:34.394 "num_base_bdevs_operational": 2, 00:14:34.394 "base_bdevs_list": [ 00:14:34.394 { 00:14:34.394 "name": null, 00:14:34.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.394 "is_configured": false, 00:14:34.394 "data_offset": 0, 00:14:34.394 "data_size": 63488 00:14:34.394 }, 00:14:34.394 { 
00:14:34.394 "name": null, 00:14:34.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.394 "is_configured": false, 00:14:34.394 "data_offset": 2048, 00:14:34.394 "data_size": 63488 00:14:34.394 }, 00:14:34.394 { 00:14:34.394 "name": "BaseBdev3", 00:14:34.394 "uuid": "77e4c29c-56d8-5f58-9526-3ceb892b92f6", 00:14:34.394 "is_configured": true, 00:14:34.394 "data_offset": 2048, 00:14:34.394 "data_size": 63488 00:14:34.394 }, 00:14:34.394 { 00:14:34.394 "name": "BaseBdev4", 00:14:34.394 "uuid": "e0965463-58d6-571b-9944-f188fa7fdbef", 00:14:34.394 "is_configured": true, 00:14:34.394 "data_offset": 2048, 00:14:34.394 "data_size": 63488 00:14:34.394 } 00:14:34.394 ] 00:14:34.394 }' 00:14:34.395 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.395 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.652 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:34.652 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.652 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:34.652 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:34.652 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.652 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.652 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.652 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.652 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.652 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.652 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.652 "name": "raid_bdev1", 00:14:34.652 "uuid": "447096cd-a629-4417-ac05-63744e3cd115", 00:14:34.652 "strip_size_kb": 0, 00:14:34.652 "state": "online", 00:14:34.652 "raid_level": "raid1", 00:14:34.652 "superblock": true, 00:14:34.652 "num_base_bdevs": 4, 00:14:34.652 "num_base_bdevs_discovered": 2, 00:14:34.652 "num_base_bdevs_operational": 2, 00:14:34.652 "base_bdevs_list": [ 00:14:34.652 { 00:14:34.652 "name": null, 00:14:34.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.652 "is_configured": false, 00:14:34.652 "data_offset": 0, 00:14:34.652 "data_size": 63488 00:14:34.652 }, 00:14:34.652 { 00:14:34.652 "name": null, 00:14:34.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.652 "is_configured": false, 00:14:34.652 "data_offset": 2048, 00:14:34.652 "data_size": 63488 00:14:34.652 }, 00:14:34.652 { 00:14:34.652 "name": "BaseBdev3", 00:14:34.652 "uuid": "77e4c29c-56d8-5f58-9526-3ceb892b92f6", 00:14:34.652 "is_configured": true, 00:14:34.652 "data_offset": 2048, 00:14:34.652 "data_size": 63488 00:14:34.652 }, 00:14:34.652 { 00:14:34.652 "name": "BaseBdev4", 00:14:34.652 "uuid": "e0965463-58d6-571b-9944-f188fa7fdbef", 00:14:34.652 "is_configured": true, 00:14:34.652 "data_offset": 2048, 00:14:34.652 "data_size": 63488 00:14:34.652 } 00:14:34.652 ] 00:14:34.652 }' 00:14:34.652 15:22:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.652 15:22:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:34.652 15:22:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.652 15:22:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:34.652 15:22:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:34.652 15:22:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.652 15:22:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.652 15:22:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.652 15:22:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:34.652 15:22:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.652 15:22:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.652 [2024-11-20 15:22:21.082829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:34.652 [2024-11-20 15:22:21.082908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.652 [2024-11-20 15:22:21.082932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:14:34.652 [2024-11-20 15:22:21.082947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.652 [2024-11-20 15:22:21.083450] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.652 [2024-11-20 15:22:21.083484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:34.652 [2024-11-20 15:22:21.083578] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:34.652 [2024-11-20 15:22:21.083600] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:34.652 [2024-11-20 15:22:21.083611] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:34.652 [2024-11-20 15:22:21.083647] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:34.652 BaseBdev1 00:14:34.652 15:22:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.652 15:22:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:36.034 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:36.034 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.034 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.034 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.034 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.034 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:36.034 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.034 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.034 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.034 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.034 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.034 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.034 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.034 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.034 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.034 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.034 "name": "raid_bdev1", 00:14:36.034 "uuid": "447096cd-a629-4417-ac05-63744e3cd115", 00:14:36.034 "strip_size_kb": 0, 00:14:36.034 "state": "online", 00:14:36.034 "raid_level": "raid1", 00:14:36.034 "superblock": true, 00:14:36.034 "num_base_bdevs": 4, 00:14:36.034 "num_base_bdevs_discovered": 2, 00:14:36.034 "num_base_bdevs_operational": 2, 00:14:36.034 "base_bdevs_list": [ 00:14:36.034 { 00:14:36.034 "name": null, 00:14:36.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.034 "is_configured": false, 00:14:36.034 "data_offset": 0, 00:14:36.034 "data_size": 63488 00:14:36.034 }, 00:14:36.034 { 00:14:36.034 "name": null, 00:14:36.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.034 "is_configured": false, 00:14:36.034 "data_offset": 2048, 00:14:36.034 "data_size": 63488 00:14:36.034 }, 00:14:36.034 { 00:14:36.034 "name": "BaseBdev3", 00:14:36.034 "uuid": "77e4c29c-56d8-5f58-9526-3ceb892b92f6", 00:14:36.034 "is_configured": true, 00:14:36.034 "data_offset": 2048, 00:14:36.034 "data_size": 63488 00:14:36.034 }, 00:14:36.034 { 00:14:36.034 "name": "BaseBdev4", 00:14:36.034 "uuid": "e0965463-58d6-571b-9944-f188fa7fdbef", 00:14:36.034 "is_configured": true, 00:14:36.034 "data_offset": 2048, 00:14:36.034 "data_size": 63488 00:14:36.034 } 00:14:36.034 ] 00:14:36.034 }' 00:14:36.034 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.034 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.293 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:36.293 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.293 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 
-- # local process_type=none 00:14:36.293 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:36.293 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.293 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.293 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.293 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.293 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.293 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.293 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.293 "name": "raid_bdev1", 00:14:36.293 "uuid": "447096cd-a629-4417-ac05-63744e3cd115", 00:14:36.293 "strip_size_kb": 0, 00:14:36.293 "state": "online", 00:14:36.293 "raid_level": "raid1", 00:14:36.293 "superblock": true, 00:14:36.293 "num_base_bdevs": 4, 00:14:36.293 "num_base_bdevs_discovered": 2, 00:14:36.293 "num_base_bdevs_operational": 2, 00:14:36.293 "base_bdevs_list": [ 00:14:36.293 { 00:14:36.293 "name": null, 00:14:36.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.293 "is_configured": false, 00:14:36.293 "data_offset": 0, 00:14:36.293 "data_size": 63488 00:14:36.293 }, 00:14:36.293 { 00:14:36.293 "name": null, 00:14:36.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.293 "is_configured": false, 00:14:36.293 "data_offset": 2048, 00:14:36.293 "data_size": 63488 00:14:36.293 }, 00:14:36.293 { 00:14:36.293 "name": "BaseBdev3", 00:14:36.293 "uuid": "77e4c29c-56d8-5f58-9526-3ceb892b92f6", 00:14:36.293 "is_configured": true, 00:14:36.293 "data_offset": 2048, 00:14:36.294 "data_size": 63488 00:14:36.294 }, 00:14:36.294 { 00:14:36.294 
"name": "BaseBdev4", 00:14:36.294 "uuid": "e0965463-58d6-571b-9944-f188fa7fdbef", 00:14:36.294 "is_configured": true, 00:14:36.294 "data_offset": 2048, 00:14:36.294 "data_size": 63488 00:14:36.294 } 00:14:36.294 ] 00:14:36.294 }' 00:14:36.294 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.294 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:36.294 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.294 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:36.294 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:36.294 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:14:36.294 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:36.294 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:36.294 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:36.294 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:36.294 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:36.294 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:36.294 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.294 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.294 [2024-11-20 15:22:22.658896] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:36.294 [2024-11-20 15:22:22.659076] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:36.294 [2024-11-20 15:22:22.659092] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:36.294 request: 00:14:36.294 { 00:14:36.294 "base_bdev": "BaseBdev1", 00:14:36.294 "raid_bdev": "raid_bdev1", 00:14:36.294 "method": "bdev_raid_add_base_bdev", 00:14:36.294 "req_id": 1 00:14:36.294 } 00:14:36.294 Got JSON-RPC error response 00:14:36.294 response: 00:14:36.294 { 00:14:36.294 "code": -22, 00:14:36.294 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:36.294 } 00:14:36.294 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:36.294 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:14:36.294 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:36.294 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:36.294 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:36.294 15:22:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:37.232 15:22:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:37.232 15:22:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.232 15:22:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.232 15:22:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.232 15:22:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 
-- # local strip_size=0 00:14:37.232 15:22:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:37.232 15:22:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.232 15:22:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.232 15:22:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.232 15:22:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.232 15:22:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.232 15:22:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.232 15:22:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.232 15:22:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.232 15:22:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.491 15:22:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.491 "name": "raid_bdev1", 00:14:37.491 "uuid": "447096cd-a629-4417-ac05-63744e3cd115", 00:14:37.491 "strip_size_kb": 0, 00:14:37.491 "state": "online", 00:14:37.491 "raid_level": "raid1", 00:14:37.491 "superblock": true, 00:14:37.491 "num_base_bdevs": 4, 00:14:37.491 "num_base_bdevs_discovered": 2, 00:14:37.491 "num_base_bdevs_operational": 2, 00:14:37.491 "base_bdevs_list": [ 00:14:37.491 { 00:14:37.491 "name": null, 00:14:37.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.491 "is_configured": false, 00:14:37.491 "data_offset": 0, 00:14:37.491 "data_size": 63488 00:14:37.491 }, 00:14:37.491 { 00:14:37.491 "name": null, 00:14:37.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.491 "is_configured": false, 
00:14:37.491 "data_offset": 2048, 00:14:37.491 "data_size": 63488 00:14:37.491 }, 00:14:37.491 { 00:14:37.491 "name": "BaseBdev3", 00:14:37.491 "uuid": "77e4c29c-56d8-5f58-9526-3ceb892b92f6", 00:14:37.491 "is_configured": true, 00:14:37.491 "data_offset": 2048, 00:14:37.491 "data_size": 63488 00:14:37.491 }, 00:14:37.491 { 00:14:37.491 "name": "BaseBdev4", 00:14:37.491 "uuid": "e0965463-58d6-571b-9944-f188fa7fdbef", 00:14:37.491 "is_configured": true, 00:14:37.491 "data_offset": 2048, 00:14:37.491 "data_size": 63488 00:14:37.491 } 00:14:37.491 ] 00:14:37.491 }' 00:14:37.491 15:22:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.491 15:22:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.750 15:22:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:37.750 15:22:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.750 15:22:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:37.750 15:22:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:37.750 15:22:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.750 15:22:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.750 15:22:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.750 15:22:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.750 15:22:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.750 15:22:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.750 15:22:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:14:37.750 "name": "raid_bdev1", 00:14:37.750 "uuid": "447096cd-a629-4417-ac05-63744e3cd115", 00:14:37.750 "strip_size_kb": 0, 00:14:37.750 "state": "online", 00:14:37.750 "raid_level": "raid1", 00:14:37.750 "superblock": true, 00:14:37.750 "num_base_bdevs": 4, 00:14:37.750 "num_base_bdevs_discovered": 2, 00:14:37.750 "num_base_bdevs_operational": 2, 00:14:37.750 "base_bdevs_list": [ 00:14:37.750 { 00:14:37.750 "name": null, 00:14:37.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.750 "is_configured": false, 00:14:37.750 "data_offset": 0, 00:14:37.750 "data_size": 63488 00:14:37.750 }, 00:14:37.750 { 00:14:37.750 "name": null, 00:14:37.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.750 "is_configured": false, 00:14:37.750 "data_offset": 2048, 00:14:37.751 "data_size": 63488 00:14:37.751 }, 00:14:37.751 { 00:14:37.751 "name": "BaseBdev3", 00:14:37.751 "uuid": "77e4c29c-56d8-5f58-9526-3ceb892b92f6", 00:14:37.751 "is_configured": true, 00:14:37.751 "data_offset": 2048, 00:14:37.751 "data_size": 63488 00:14:37.751 }, 00:14:37.751 { 00:14:37.751 "name": "BaseBdev4", 00:14:37.751 "uuid": "e0965463-58d6-571b-9944-f188fa7fdbef", 00:14:37.751 "is_configured": true, 00:14:37.751 "data_offset": 2048, 00:14:37.751 "data_size": 63488 00:14:37.751 } 00:14:37.751 ] 00:14:37.751 }' 00:14:37.751 15:22:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.751 15:22:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:37.751 15:22:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.751 15:22:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:37.751 15:22:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 78984 00:14:37.751 15:22:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 
78984 ']' 00:14:37.751 15:22:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 78984 00:14:37.751 15:22:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:37.751 15:22:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:37.751 15:22:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78984 00:14:38.011 15:22:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:38.011 killing process with pid 78984 00:14:38.011 15:22:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:38.011 15:22:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78984' 00:14:38.011 15:22:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 78984 00:14:38.011 Received shutdown signal, test time was about 17.981868 seconds 00:14:38.011 00:14:38.011 Latency(us) 00:14:38.011 [2024-11-20T15:22:24.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.011 [2024-11-20T15:22:24.493Z] =================================================================================================================== 00:14:38.011 [2024-11-20T15:22:24.493Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:38.011 [2024-11-20 15:22:24.244243] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:38.011 15:22:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 78984 00:14:38.011 [2024-11-20 15:22:24.244383] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:38.011 [2024-11-20 15:22:24.244455] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:38.011 [2024-11-20 15:22:24.244467] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:38.271 [2024-11-20 15:22:24.671286] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:39.649 15:22:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:39.649 00:14:39.649 real 0m21.536s 00:14:39.649 user 0m27.811s 00:14:39.649 sys 0m2.926s 00:14:39.649 ************************************ 00:14:39.649 END TEST raid_rebuild_test_sb_io 00:14:39.649 ************************************ 00:14:39.649 15:22:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:39.649 15:22:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.649 15:22:25 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:39.649 15:22:25 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:14:39.649 15:22:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:39.649 15:22:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:39.649 15:22:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:39.649 ************************************ 00:14:39.649 START TEST raid5f_state_function_test 00:14:39.649 ************************************ 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79708 00:14:39.649 Process raid pid: 79708 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79708' 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79708 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 79708 ']' 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:39.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:39.649 15:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.649 [2024-11-20 15:22:26.061460] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:14:39.649 [2024-11-20 15:22:26.061598] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:39.936 [2024-11-20 15:22:26.247146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.936 [2024-11-20 15:22:26.370601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.208 [2024-11-20 15:22:26.568104] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:40.208 [2024-11-20 15:22:26.568160] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:40.789 15:22:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:40.789 15:22:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:40.789 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:40.789 15:22:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.789 15:22:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.789 [2024-11-20 15:22:26.987214] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:40.789 [2024-11-20 15:22:26.987281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:40.789 [2024-11-20 15:22:26.987294] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:40.789 [2024-11-20 15:22:26.987308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:40.789 [2024-11-20 15:22:26.987316] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:40.789 [2024-11-20 15:22:26.987328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:40.789 15:22:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.789 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:40.789 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.789 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.789 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.789 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.790 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:40.790 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.790 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.790 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.790 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.790 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.790 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.790 15:22:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.790 15:22:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.790 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:40.790 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.790 "name": "Existed_Raid", 00:14:40.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.790 "strip_size_kb": 64, 00:14:40.790 "state": "configuring", 00:14:40.790 "raid_level": "raid5f", 00:14:40.790 "superblock": false, 00:14:40.790 "num_base_bdevs": 3, 00:14:40.790 "num_base_bdevs_discovered": 0, 00:14:40.790 "num_base_bdevs_operational": 3, 00:14:40.790 "base_bdevs_list": [ 00:14:40.790 { 00:14:40.790 "name": "BaseBdev1", 00:14:40.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.790 "is_configured": false, 00:14:40.790 "data_offset": 0, 00:14:40.790 "data_size": 0 00:14:40.790 }, 00:14:40.790 { 00:14:40.790 "name": "BaseBdev2", 00:14:40.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.790 "is_configured": false, 00:14:40.790 "data_offset": 0, 00:14:40.790 "data_size": 0 00:14:40.790 }, 00:14:40.790 { 00:14:40.790 "name": "BaseBdev3", 00:14:40.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.790 "is_configured": false, 00:14:40.790 "data_offset": 0, 00:14:40.790 "data_size": 0 00:14:40.790 } 00:14:40.790 ] 00:14:40.790 }' 00:14:40.790 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.790 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.049 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:41.049 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.049 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.049 [2024-11-20 15:22:27.434719] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:41.049 [2024-11-20 15:22:27.434769] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:14:41.049 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.049 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:41.049 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.049 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.049 [2024-11-20 15:22:27.446711] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:41.049 [2024-11-20 15:22:27.446792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:41.049 [2024-11-20 15:22:27.446803] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:41.049 [2024-11-20 15:22:27.446817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:41.049 [2024-11-20 15:22:27.446825] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:41.049 [2024-11-20 15:22:27.446838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:41.049 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.049 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:41.049 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.049 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.049 [2024-11-20 15:22:27.497455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:41.049 BaseBdev1 00:14:41.049 15:22:27 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.049 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:41.049 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:41.049 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:41.049 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:41.049 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:41.049 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:41.049 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:41.049 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.049 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.049 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.049 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:41.049 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.049 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.049 [ 00:14:41.049 { 00:14:41.049 "name": "BaseBdev1", 00:14:41.049 "aliases": [ 00:14:41.049 "e3eb061b-0bea-4958-8bf5-620046c446d2" 00:14:41.049 ], 00:14:41.049 "product_name": "Malloc disk", 00:14:41.049 "block_size": 512, 00:14:41.049 "num_blocks": 65536, 00:14:41.049 "uuid": "e3eb061b-0bea-4958-8bf5-620046c446d2", 00:14:41.049 "assigned_rate_limits": { 00:14:41.049 "rw_ios_per_sec": 0, 00:14:41.049 
"rw_mbytes_per_sec": 0, 00:14:41.049 "r_mbytes_per_sec": 0, 00:14:41.049 "w_mbytes_per_sec": 0 00:14:41.049 }, 00:14:41.049 "claimed": true, 00:14:41.049 "claim_type": "exclusive_write", 00:14:41.049 "zoned": false, 00:14:41.049 "supported_io_types": { 00:14:41.308 "read": true, 00:14:41.308 "write": true, 00:14:41.308 "unmap": true, 00:14:41.308 "flush": true, 00:14:41.308 "reset": true, 00:14:41.308 "nvme_admin": false, 00:14:41.308 "nvme_io": false, 00:14:41.308 "nvme_io_md": false, 00:14:41.308 "write_zeroes": true, 00:14:41.308 "zcopy": true, 00:14:41.308 "get_zone_info": false, 00:14:41.308 "zone_management": false, 00:14:41.308 "zone_append": false, 00:14:41.308 "compare": false, 00:14:41.308 "compare_and_write": false, 00:14:41.308 "abort": true, 00:14:41.308 "seek_hole": false, 00:14:41.308 "seek_data": false, 00:14:41.308 "copy": true, 00:14:41.308 "nvme_iov_md": false 00:14:41.308 }, 00:14:41.308 "memory_domains": [ 00:14:41.308 { 00:14:41.308 "dma_device_id": "system", 00:14:41.308 "dma_device_type": 1 00:14:41.308 }, 00:14:41.308 { 00:14:41.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.308 "dma_device_type": 2 00:14:41.308 } 00:14:41.308 ], 00:14:41.308 "driver_specific": {} 00:14:41.308 } 00:14:41.308 ] 00:14:41.308 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.308 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:41.308 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:41.308 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.308 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.308 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.308 15:22:27 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.308 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.308 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.308 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.308 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.308 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.308 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.308 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.308 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.308 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.308 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.308 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.308 "name": "Existed_Raid", 00:14:41.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.308 "strip_size_kb": 64, 00:14:41.308 "state": "configuring", 00:14:41.308 "raid_level": "raid5f", 00:14:41.308 "superblock": false, 00:14:41.308 "num_base_bdevs": 3, 00:14:41.308 "num_base_bdevs_discovered": 1, 00:14:41.308 "num_base_bdevs_operational": 3, 00:14:41.308 "base_bdevs_list": [ 00:14:41.308 { 00:14:41.308 "name": "BaseBdev1", 00:14:41.308 "uuid": "e3eb061b-0bea-4958-8bf5-620046c446d2", 00:14:41.308 "is_configured": true, 00:14:41.308 "data_offset": 0, 00:14:41.308 "data_size": 65536 00:14:41.308 }, 00:14:41.308 { 00:14:41.308 "name": 
"BaseBdev2", 00:14:41.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.308 "is_configured": false, 00:14:41.308 "data_offset": 0, 00:14:41.308 "data_size": 0 00:14:41.308 }, 00:14:41.308 { 00:14:41.308 "name": "BaseBdev3", 00:14:41.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.308 "is_configured": false, 00:14:41.308 "data_offset": 0, 00:14:41.308 "data_size": 0 00:14:41.308 } 00:14:41.308 ] 00:14:41.308 }' 00:14:41.308 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.308 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.567 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:41.567 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.567 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.567 [2024-11-20 15:22:27.972854] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:41.567 [2024-11-20 15:22:27.972916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:41.567 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.567 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:41.567 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.567 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.567 [2024-11-20 15:22:27.984908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:41.567 [2024-11-20 15:22:27.987005] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:14:41.567 [2024-11-20 15:22:27.987060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:41.567 [2024-11-20 15:22:27.987072] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:41.567 [2024-11-20 15:22:27.987084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:41.567 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.567 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:41.567 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:41.567 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:41.567 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.567 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.567 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.567 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.567 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.567 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.567 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.567 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.567 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.567 15:22:27 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.567 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.568 15:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.568 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.568 15:22:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.568 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.568 "name": "Existed_Raid", 00:14:41.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.568 "strip_size_kb": 64, 00:14:41.568 "state": "configuring", 00:14:41.568 "raid_level": "raid5f", 00:14:41.568 "superblock": false, 00:14:41.568 "num_base_bdevs": 3, 00:14:41.568 "num_base_bdevs_discovered": 1, 00:14:41.568 "num_base_bdevs_operational": 3, 00:14:41.568 "base_bdevs_list": [ 00:14:41.568 { 00:14:41.568 "name": "BaseBdev1", 00:14:41.568 "uuid": "e3eb061b-0bea-4958-8bf5-620046c446d2", 00:14:41.568 "is_configured": true, 00:14:41.568 "data_offset": 0, 00:14:41.568 "data_size": 65536 00:14:41.568 }, 00:14:41.568 { 00:14:41.568 "name": "BaseBdev2", 00:14:41.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.568 "is_configured": false, 00:14:41.568 "data_offset": 0, 00:14:41.568 "data_size": 0 00:14:41.568 }, 00:14:41.568 { 00:14:41.568 "name": "BaseBdev3", 00:14:41.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.568 "is_configured": false, 00:14:41.568 "data_offset": 0, 00:14:41.568 "data_size": 0 00:14:41.568 } 00:14:41.568 ] 00:14:41.568 }' 00:14:41.568 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.568 15:22:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.135 15:22:28 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:42.135 15:22:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.135 15:22:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.135 [2024-11-20 15:22:28.483778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:42.135 BaseBdev2 00:14:42.135 15:22:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:42.136 [ 00:14:42.136 { 00:14:42.136 "name": "BaseBdev2", 00:14:42.136 "aliases": [ 00:14:42.136 "41e440ff-7af6-4616-b724-fc384fdbae5f" 00:14:42.136 ], 00:14:42.136 "product_name": "Malloc disk", 00:14:42.136 "block_size": 512, 00:14:42.136 "num_blocks": 65536, 00:14:42.136 "uuid": "41e440ff-7af6-4616-b724-fc384fdbae5f", 00:14:42.136 "assigned_rate_limits": { 00:14:42.136 "rw_ios_per_sec": 0, 00:14:42.136 "rw_mbytes_per_sec": 0, 00:14:42.136 "r_mbytes_per_sec": 0, 00:14:42.136 "w_mbytes_per_sec": 0 00:14:42.136 }, 00:14:42.136 "claimed": true, 00:14:42.136 "claim_type": "exclusive_write", 00:14:42.136 "zoned": false, 00:14:42.136 "supported_io_types": { 00:14:42.136 "read": true, 00:14:42.136 "write": true, 00:14:42.136 "unmap": true, 00:14:42.136 "flush": true, 00:14:42.136 "reset": true, 00:14:42.136 "nvme_admin": false, 00:14:42.136 "nvme_io": false, 00:14:42.136 "nvme_io_md": false, 00:14:42.136 "write_zeroes": true, 00:14:42.136 "zcopy": true, 00:14:42.136 "get_zone_info": false, 00:14:42.136 "zone_management": false, 00:14:42.136 "zone_append": false, 00:14:42.136 "compare": false, 00:14:42.136 "compare_and_write": false, 00:14:42.136 "abort": true, 00:14:42.136 "seek_hole": false, 00:14:42.136 "seek_data": false, 00:14:42.136 "copy": true, 00:14:42.136 "nvme_iov_md": false 00:14:42.136 }, 00:14:42.136 "memory_domains": [ 00:14:42.136 { 00:14:42.136 "dma_device_id": "system", 00:14:42.136 "dma_device_type": 1 00:14:42.136 }, 00:14:42.136 { 00:14:42.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.136 "dma_device_type": 2 00:14:42.136 } 00:14:42.136 ], 00:14:42.136 "driver_specific": {} 00:14:42.136 } 00:14:42.136 ] 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:14:42.136 "name": "Existed_Raid", 00:14:42.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.136 "strip_size_kb": 64, 00:14:42.136 "state": "configuring", 00:14:42.136 "raid_level": "raid5f", 00:14:42.136 "superblock": false, 00:14:42.136 "num_base_bdevs": 3, 00:14:42.136 "num_base_bdevs_discovered": 2, 00:14:42.136 "num_base_bdevs_operational": 3, 00:14:42.136 "base_bdevs_list": [ 00:14:42.136 { 00:14:42.136 "name": "BaseBdev1", 00:14:42.136 "uuid": "e3eb061b-0bea-4958-8bf5-620046c446d2", 00:14:42.136 "is_configured": true, 00:14:42.136 "data_offset": 0, 00:14:42.136 "data_size": 65536 00:14:42.136 }, 00:14:42.136 { 00:14:42.136 "name": "BaseBdev2", 00:14:42.136 "uuid": "41e440ff-7af6-4616-b724-fc384fdbae5f", 00:14:42.136 "is_configured": true, 00:14:42.136 "data_offset": 0, 00:14:42.136 "data_size": 65536 00:14:42.136 }, 00:14:42.136 { 00:14:42.136 "name": "BaseBdev3", 00:14:42.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.136 "is_configured": false, 00:14:42.136 "data_offset": 0, 00:14:42.136 "data_size": 0 00:14:42.136 } 00:14:42.136 ] 00:14:42.136 }' 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.136 15:22:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.703 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:42.703 15:22:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.703 15:22:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.703 [2024-11-20 15:22:29.028797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:42.703 [2024-11-20 15:22:29.028878] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:42.703 [2024-11-20 15:22:29.028895] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:42.704 [2024-11-20 15:22:29.029172] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:42.704 [2024-11-20 15:22:29.035021] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:42.704 [2024-11-20 15:22:29.035056] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:42.704 [2024-11-20 15:22:29.035347] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.704 BaseBdev3 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.704 [ 00:14:42.704 { 00:14:42.704 "name": "BaseBdev3", 00:14:42.704 "aliases": [ 00:14:42.704 "dda33cc6-a8b7-4f22-866e-f234e1667649" 00:14:42.704 ], 00:14:42.704 "product_name": "Malloc disk", 00:14:42.704 "block_size": 512, 00:14:42.704 "num_blocks": 65536, 00:14:42.704 "uuid": "dda33cc6-a8b7-4f22-866e-f234e1667649", 00:14:42.704 "assigned_rate_limits": { 00:14:42.704 "rw_ios_per_sec": 0, 00:14:42.704 "rw_mbytes_per_sec": 0, 00:14:42.704 "r_mbytes_per_sec": 0, 00:14:42.704 "w_mbytes_per_sec": 0 00:14:42.704 }, 00:14:42.704 "claimed": true, 00:14:42.704 "claim_type": "exclusive_write", 00:14:42.704 "zoned": false, 00:14:42.704 "supported_io_types": { 00:14:42.704 "read": true, 00:14:42.704 "write": true, 00:14:42.704 "unmap": true, 00:14:42.704 "flush": true, 00:14:42.704 "reset": true, 00:14:42.704 "nvme_admin": false, 00:14:42.704 "nvme_io": false, 00:14:42.704 "nvme_io_md": false, 00:14:42.704 "write_zeroes": true, 00:14:42.704 "zcopy": true, 00:14:42.704 "get_zone_info": false, 00:14:42.704 "zone_management": false, 00:14:42.704 "zone_append": false, 00:14:42.704 "compare": false, 00:14:42.704 "compare_and_write": false, 00:14:42.704 "abort": true, 00:14:42.704 "seek_hole": false, 00:14:42.704 "seek_data": false, 00:14:42.704 "copy": true, 00:14:42.704 "nvme_iov_md": false 00:14:42.704 }, 00:14:42.704 "memory_domains": [ 00:14:42.704 { 00:14:42.704 "dma_device_id": "system", 00:14:42.704 "dma_device_type": 1 00:14:42.704 }, 00:14:42.704 { 00:14:42.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.704 "dma_device_type": 2 00:14:42.704 } 00:14:42.704 ], 00:14:42.704 "driver_specific": {} 00:14:42.704 } 00:14:42.704 ] 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.704 15:22:29 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.704 "name": "Existed_Raid", 00:14:42.704 "uuid": "597919e2-b06d-425d-b229-ecee589a6c23", 00:14:42.704 "strip_size_kb": 64, 00:14:42.704 "state": "online", 00:14:42.704 "raid_level": "raid5f", 00:14:42.704 "superblock": false, 00:14:42.704 "num_base_bdevs": 3, 00:14:42.704 "num_base_bdevs_discovered": 3, 00:14:42.704 "num_base_bdevs_operational": 3, 00:14:42.704 "base_bdevs_list": [ 00:14:42.704 { 00:14:42.704 "name": "BaseBdev1", 00:14:42.704 "uuid": "e3eb061b-0bea-4958-8bf5-620046c446d2", 00:14:42.704 "is_configured": true, 00:14:42.704 "data_offset": 0, 00:14:42.704 "data_size": 65536 00:14:42.704 }, 00:14:42.704 { 00:14:42.704 "name": "BaseBdev2", 00:14:42.704 "uuid": "41e440ff-7af6-4616-b724-fc384fdbae5f", 00:14:42.704 "is_configured": true, 00:14:42.704 "data_offset": 0, 00:14:42.704 "data_size": 65536 00:14:42.704 }, 00:14:42.704 { 00:14:42.704 "name": "BaseBdev3", 00:14:42.704 "uuid": "dda33cc6-a8b7-4f22-866e-f234e1667649", 00:14:42.704 "is_configured": true, 00:14:42.704 "data_offset": 0, 00:14:42.704 "data_size": 65536 00:14:42.704 } 00:14:42.704 ] 00:14:42.704 }' 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.704 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.271 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:43.271 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:43.271 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:43.271 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:43.271 15:22:29 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:43.271 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:43.271 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:43.271 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:43.271 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.271 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.271 [2024-11-20 15:22:29.509405] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:43.271 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.271 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:43.271 "name": "Existed_Raid", 00:14:43.271 "aliases": [ 00:14:43.271 "597919e2-b06d-425d-b229-ecee589a6c23" 00:14:43.271 ], 00:14:43.271 "product_name": "Raid Volume", 00:14:43.271 "block_size": 512, 00:14:43.271 "num_blocks": 131072, 00:14:43.271 "uuid": "597919e2-b06d-425d-b229-ecee589a6c23", 00:14:43.271 "assigned_rate_limits": { 00:14:43.271 "rw_ios_per_sec": 0, 00:14:43.271 "rw_mbytes_per_sec": 0, 00:14:43.271 "r_mbytes_per_sec": 0, 00:14:43.271 "w_mbytes_per_sec": 0 00:14:43.271 }, 00:14:43.271 "claimed": false, 00:14:43.271 "zoned": false, 00:14:43.271 "supported_io_types": { 00:14:43.271 "read": true, 00:14:43.271 "write": true, 00:14:43.271 "unmap": false, 00:14:43.271 "flush": false, 00:14:43.271 "reset": true, 00:14:43.271 "nvme_admin": false, 00:14:43.271 "nvme_io": false, 00:14:43.271 "nvme_io_md": false, 00:14:43.271 "write_zeroes": true, 00:14:43.271 "zcopy": false, 00:14:43.271 "get_zone_info": false, 00:14:43.271 "zone_management": false, 00:14:43.271 "zone_append": false, 
00:14:43.271 "compare": false, 00:14:43.271 "compare_and_write": false, 00:14:43.271 "abort": false, 00:14:43.271 "seek_hole": false, 00:14:43.271 "seek_data": false, 00:14:43.271 "copy": false, 00:14:43.271 "nvme_iov_md": false 00:14:43.271 }, 00:14:43.271 "driver_specific": { 00:14:43.271 "raid": { 00:14:43.271 "uuid": "597919e2-b06d-425d-b229-ecee589a6c23", 00:14:43.271 "strip_size_kb": 64, 00:14:43.271 "state": "online", 00:14:43.271 "raid_level": "raid5f", 00:14:43.271 "superblock": false, 00:14:43.271 "num_base_bdevs": 3, 00:14:43.271 "num_base_bdevs_discovered": 3, 00:14:43.271 "num_base_bdevs_operational": 3, 00:14:43.271 "base_bdevs_list": [ 00:14:43.271 { 00:14:43.271 "name": "BaseBdev1", 00:14:43.271 "uuid": "e3eb061b-0bea-4958-8bf5-620046c446d2", 00:14:43.271 "is_configured": true, 00:14:43.271 "data_offset": 0, 00:14:43.271 "data_size": 65536 00:14:43.272 }, 00:14:43.272 { 00:14:43.272 "name": "BaseBdev2", 00:14:43.272 "uuid": "41e440ff-7af6-4616-b724-fc384fdbae5f", 00:14:43.272 "is_configured": true, 00:14:43.272 "data_offset": 0, 00:14:43.272 "data_size": 65536 00:14:43.272 }, 00:14:43.272 { 00:14:43.272 "name": "BaseBdev3", 00:14:43.272 "uuid": "dda33cc6-a8b7-4f22-866e-f234e1667649", 00:14:43.272 "is_configured": true, 00:14:43.272 "data_offset": 0, 00:14:43.272 "data_size": 65536 00:14:43.272 } 00:14:43.272 ] 00:14:43.272 } 00:14:43.272 } 00:14:43.272 }' 00:14:43.272 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:43.272 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:43.272 BaseBdev2 00:14:43.272 BaseBdev3' 00:14:43.272 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.272 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:14:43.272 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.272 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.272 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:43.272 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.272 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.272 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.272 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.272 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.272 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.272 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:43.272 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.272 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.272 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.272 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.272 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.272 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.272 15:22:29 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.272 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:43.272 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.272 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.272 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.272 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.531 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.531 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.531 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:43.531 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.531 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.531 [2024-11-20 15:22:29.760886] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:43.531 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.531 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:43.531 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:43.531 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:43.531 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:43.531 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:43.531 
15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:43.531 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.531 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.531 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:43.531 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.531 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:43.531 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.531 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.531 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.531 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.531 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.531 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.531 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.531 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.531 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.531 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.531 "name": "Existed_Raid", 00:14:43.531 "uuid": "597919e2-b06d-425d-b229-ecee589a6c23", 00:14:43.531 "strip_size_kb": 64, 00:14:43.531 "state": 
"online", 00:14:43.531 "raid_level": "raid5f", 00:14:43.531 "superblock": false, 00:14:43.531 "num_base_bdevs": 3, 00:14:43.531 "num_base_bdevs_discovered": 2, 00:14:43.531 "num_base_bdevs_operational": 2, 00:14:43.531 "base_bdevs_list": [ 00:14:43.531 { 00:14:43.531 "name": null, 00:14:43.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.531 "is_configured": false, 00:14:43.531 "data_offset": 0, 00:14:43.531 "data_size": 65536 00:14:43.531 }, 00:14:43.531 { 00:14:43.531 "name": "BaseBdev2", 00:14:43.531 "uuid": "41e440ff-7af6-4616-b724-fc384fdbae5f", 00:14:43.531 "is_configured": true, 00:14:43.531 "data_offset": 0, 00:14:43.531 "data_size": 65536 00:14:43.531 }, 00:14:43.531 { 00:14:43.531 "name": "BaseBdev3", 00:14:43.531 "uuid": "dda33cc6-a8b7-4f22-866e-f234e1667649", 00:14:43.531 "is_configured": true, 00:14:43.531 "data_offset": 0, 00:14:43.531 "data_size": 65536 00:14:43.531 } 00:14:43.531 ] 00:14:43.531 }' 00:14:43.531 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.531 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.099 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:44.099 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:44.099 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.099 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.099 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.099 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:44.099 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.099 15:22:30 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:44.099 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:44.099 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:44.099 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.099 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.099 [2024-11-20 15:22:30.329405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:44.099 [2024-11-20 15:22:30.329513] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:44.099 [2024-11-20 15:22:30.425995] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:44.099 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.099 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:44.099 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:44.099 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.099 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:44.099 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.099 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.099 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.099 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:44.100 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:14:44.100 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:44.100 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.100 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.100 [2024-11-20 15:22:30.481998] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:44.100 [2024-11-20 15:22:30.482063] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.359 BaseBdev2 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:44.359 [ 00:14:44.359 { 00:14:44.359 "name": "BaseBdev2", 00:14:44.359 "aliases": [ 00:14:44.359 "3c467d69-a665-41d2-b11f-5a6e49c9499a" 00:14:44.359 ], 00:14:44.359 "product_name": "Malloc disk", 00:14:44.359 "block_size": 512, 00:14:44.359 "num_blocks": 65536, 00:14:44.359 "uuid": "3c467d69-a665-41d2-b11f-5a6e49c9499a", 00:14:44.359 "assigned_rate_limits": { 00:14:44.359 "rw_ios_per_sec": 0, 00:14:44.359 "rw_mbytes_per_sec": 0, 00:14:44.359 "r_mbytes_per_sec": 0, 00:14:44.359 "w_mbytes_per_sec": 0 00:14:44.359 }, 00:14:44.359 "claimed": false, 00:14:44.359 "zoned": false, 00:14:44.359 "supported_io_types": { 00:14:44.359 "read": true, 00:14:44.359 "write": true, 00:14:44.359 "unmap": true, 00:14:44.359 "flush": true, 00:14:44.359 "reset": true, 00:14:44.359 "nvme_admin": false, 00:14:44.359 "nvme_io": false, 00:14:44.359 "nvme_io_md": false, 00:14:44.359 "write_zeroes": true, 00:14:44.359 "zcopy": true, 00:14:44.359 "get_zone_info": false, 00:14:44.359 "zone_management": false, 00:14:44.359 "zone_append": false, 00:14:44.359 "compare": false, 00:14:44.359 "compare_and_write": false, 00:14:44.359 "abort": true, 00:14:44.359 "seek_hole": false, 00:14:44.359 "seek_data": false, 00:14:44.359 "copy": true, 00:14:44.359 "nvme_iov_md": false 00:14:44.359 }, 00:14:44.359 "memory_domains": [ 00:14:44.359 { 00:14:44.359 "dma_device_id": "system", 00:14:44.359 "dma_device_type": 1 00:14:44.359 }, 00:14:44.359 { 00:14:44.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.359 "dma_device_type": 2 00:14:44.359 } 00:14:44.359 ], 00:14:44.359 "driver_specific": {} 00:14:44.359 } 00:14:44.359 ] 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.359 BaseBdev3 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:44.359 [ 00:14:44.359 { 00:14:44.359 "name": "BaseBdev3", 00:14:44.359 "aliases": [ 00:14:44.359 "9537c6e2-419e-48f0-be69-b4fa78e1733a" 00:14:44.359 ], 00:14:44.359 "product_name": "Malloc disk", 00:14:44.359 "block_size": 512, 00:14:44.359 "num_blocks": 65536, 00:14:44.359 "uuid": "9537c6e2-419e-48f0-be69-b4fa78e1733a", 00:14:44.359 "assigned_rate_limits": { 00:14:44.359 "rw_ios_per_sec": 0, 00:14:44.359 "rw_mbytes_per_sec": 0, 00:14:44.359 "r_mbytes_per_sec": 0, 00:14:44.359 "w_mbytes_per_sec": 0 00:14:44.359 }, 00:14:44.359 "claimed": false, 00:14:44.359 "zoned": false, 00:14:44.359 "supported_io_types": { 00:14:44.359 "read": true, 00:14:44.359 "write": true, 00:14:44.359 "unmap": true, 00:14:44.359 "flush": true, 00:14:44.359 "reset": true, 00:14:44.359 "nvme_admin": false, 00:14:44.359 "nvme_io": false, 00:14:44.359 "nvme_io_md": false, 00:14:44.359 "write_zeroes": true, 00:14:44.359 "zcopy": true, 00:14:44.359 "get_zone_info": false, 00:14:44.359 "zone_management": false, 00:14:44.359 "zone_append": false, 00:14:44.359 "compare": false, 00:14:44.359 "compare_and_write": false, 00:14:44.359 "abort": true, 00:14:44.359 "seek_hole": false, 00:14:44.359 "seek_data": false, 00:14:44.359 "copy": true, 00:14:44.359 "nvme_iov_md": false 00:14:44.359 }, 00:14:44.359 "memory_domains": [ 00:14:44.359 { 00:14:44.359 "dma_device_id": "system", 00:14:44.359 "dma_device_type": 1 00:14:44.359 }, 00:14:44.359 { 00:14:44.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.359 "dma_device_type": 2 00:14:44.359 } 00:14:44.359 ], 00:14:44.359 "driver_specific": {} 00:14:44.359 } 00:14:44.359 ] 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:44.359 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:44.359 15:22:30 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:44.360 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:44.360 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.360 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.360 [2024-11-20 15:22:30.808352] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:44.360 [2024-11-20 15:22:30.808418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:44.360 [2024-11-20 15:22:30.808449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:44.360 [2024-11-20 15:22:30.810668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:44.360 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.360 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:44.360 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.360 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.360 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.360 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.360 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.360 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.360 15:22:30 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.360 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.360 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.360 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.360 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.360 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.360 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.619 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.619 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.619 "name": "Existed_Raid", 00:14:44.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.619 "strip_size_kb": 64, 00:14:44.619 "state": "configuring", 00:14:44.619 "raid_level": "raid5f", 00:14:44.619 "superblock": false, 00:14:44.619 "num_base_bdevs": 3, 00:14:44.619 "num_base_bdevs_discovered": 2, 00:14:44.619 "num_base_bdevs_operational": 3, 00:14:44.619 "base_bdevs_list": [ 00:14:44.619 { 00:14:44.619 "name": "BaseBdev1", 00:14:44.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.619 "is_configured": false, 00:14:44.619 "data_offset": 0, 00:14:44.619 "data_size": 0 00:14:44.619 }, 00:14:44.619 { 00:14:44.619 "name": "BaseBdev2", 00:14:44.619 "uuid": "3c467d69-a665-41d2-b11f-5a6e49c9499a", 00:14:44.619 "is_configured": true, 00:14:44.619 "data_offset": 0, 00:14:44.619 "data_size": 65536 00:14:44.619 }, 00:14:44.619 { 00:14:44.619 "name": "BaseBdev3", 00:14:44.619 "uuid": "9537c6e2-419e-48f0-be69-b4fa78e1733a", 00:14:44.619 "is_configured": true, 
00:14:44.619 "data_offset": 0, 00:14:44.619 "data_size": 65536 00:14:44.619 } 00:14:44.619 ] 00:14:44.619 }' 00:14:44.619 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.619 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.879 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:44.879 15:22:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.879 15:22:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.879 [2024-11-20 15:22:31.267768] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:44.879 15:22:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.879 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:44.879 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.879 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.879 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.879 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.879 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.879 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.879 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.879 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.879 15:22:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.879 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.879 15:22:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.879 15:22:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.879 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.879 15:22:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.879 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.879 "name": "Existed_Raid", 00:14:44.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.879 "strip_size_kb": 64, 00:14:44.879 "state": "configuring", 00:14:44.879 "raid_level": "raid5f", 00:14:44.879 "superblock": false, 00:14:44.879 "num_base_bdevs": 3, 00:14:44.879 "num_base_bdevs_discovered": 1, 00:14:44.879 "num_base_bdevs_operational": 3, 00:14:44.879 "base_bdevs_list": [ 00:14:44.879 { 00:14:44.879 "name": "BaseBdev1", 00:14:44.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.879 "is_configured": false, 00:14:44.879 "data_offset": 0, 00:14:44.879 "data_size": 0 00:14:44.879 }, 00:14:44.879 { 00:14:44.879 "name": null, 00:14:44.879 "uuid": "3c467d69-a665-41d2-b11f-5a6e49c9499a", 00:14:44.879 "is_configured": false, 00:14:44.879 "data_offset": 0, 00:14:44.879 "data_size": 65536 00:14:44.879 }, 00:14:44.879 { 00:14:44.879 "name": "BaseBdev3", 00:14:44.879 "uuid": "9537c6e2-419e-48f0-be69-b4fa78e1733a", 00:14:44.879 "is_configured": true, 00:14:44.879 "data_offset": 0, 00:14:44.879 "data_size": 65536 00:14:44.879 } 00:14:44.879 ] 00:14:44.879 }' 00:14:44.879 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.879 15:22:31 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.448 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.448 15:22:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.448 15:22:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.448 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:45.448 15:22:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.448 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:45.448 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:45.448 15:22:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.448 15:22:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.448 [2024-11-20 15:22:31.773048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.448 BaseBdev1 00:14:45.448 15:22:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.448 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:45.448 15:22:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:45.448 15:22:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:45.448 15:22:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:45.449 15:22:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:45.449 15:22:31 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:45.449 15:22:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:45.449 15:22:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.449 15:22:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.449 15:22:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.449 15:22:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:45.449 15:22:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.449 15:22:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.449 [ 00:14:45.449 { 00:14:45.449 "name": "BaseBdev1", 00:14:45.449 "aliases": [ 00:14:45.449 "c5b8e1b9-1c85-45f0-8ecc-2bc7807839b7" 00:14:45.449 ], 00:14:45.449 "product_name": "Malloc disk", 00:14:45.449 "block_size": 512, 00:14:45.449 "num_blocks": 65536, 00:14:45.449 "uuid": "c5b8e1b9-1c85-45f0-8ecc-2bc7807839b7", 00:14:45.449 "assigned_rate_limits": { 00:14:45.449 "rw_ios_per_sec": 0, 00:14:45.449 "rw_mbytes_per_sec": 0, 00:14:45.449 "r_mbytes_per_sec": 0, 00:14:45.449 "w_mbytes_per_sec": 0 00:14:45.449 }, 00:14:45.449 "claimed": true, 00:14:45.449 "claim_type": "exclusive_write", 00:14:45.449 "zoned": false, 00:14:45.449 "supported_io_types": { 00:14:45.449 "read": true, 00:14:45.449 "write": true, 00:14:45.449 "unmap": true, 00:14:45.449 "flush": true, 00:14:45.449 "reset": true, 00:14:45.449 "nvme_admin": false, 00:14:45.449 "nvme_io": false, 00:14:45.449 "nvme_io_md": false, 00:14:45.449 "write_zeroes": true, 00:14:45.449 "zcopy": true, 00:14:45.449 "get_zone_info": false, 00:14:45.449 "zone_management": false, 00:14:45.449 "zone_append": false, 00:14:45.449 
"compare": false, 00:14:45.449 "compare_and_write": false, 00:14:45.449 "abort": true, 00:14:45.449 "seek_hole": false, 00:14:45.449 "seek_data": false, 00:14:45.449 "copy": true, 00:14:45.449 "nvme_iov_md": false 00:14:45.449 }, 00:14:45.449 "memory_domains": [ 00:14:45.449 { 00:14:45.449 "dma_device_id": "system", 00:14:45.449 "dma_device_type": 1 00:14:45.449 }, 00:14:45.449 { 00:14:45.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.449 "dma_device_type": 2 00:14:45.449 } 00:14:45.449 ], 00:14:45.449 "driver_specific": {} 00:14:45.449 } 00:14:45.449 ] 00:14:45.449 15:22:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.449 15:22:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:45.449 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:45.449 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.449 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.449 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.449 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.449 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.449 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.449 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.449 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.449 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.449 15:22:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.449 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.449 15:22:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.449 15:22:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.449 15:22:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.449 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.449 "name": "Existed_Raid", 00:14:45.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.449 "strip_size_kb": 64, 00:14:45.449 "state": "configuring", 00:14:45.449 "raid_level": "raid5f", 00:14:45.449 "superblock": false, 00:14:45.449 "num_base_bdevs": 3, 00:14:45.449 "num_base_bdevs_discovered": 2, 00:14:45.449 "num_base_bdevs_operational": 3, 00:14:45.449 "base_bdevs_list": [ 00:14:45.449 { 00:14:45.449 "name": "BaseBdev1", 00:14:45.449 "uuid": "c5b8e1b9-1c85-45f0-8ecc-2bc7807839b7", 00:14:45.449 "is_configured": true, 00:14:45.449 "data_offset": 0, 00:14:45.449 "data_size": 65536 00:14:45.449 }, 00:14:45.449 { 00:14:45.449 "name": null, 00:14:45.449 "uuid": "3c467d69-a665-41d2-b11f-5a6e49c9499a", 00:14:45.449 "is_configured": false, 00:14:45.449 "data_offset": 0, 00:14:45.449 "data_size": 65536 00:14:45.449 }, 00:14:45.449 { 00:14:45.449 "name": "BaseBdev3", 00:14:45.449 "uuid": "9537c6e2-419e-48f0-be69-b4fa78e1733a", 00:14:45.449 "is_configured": true, 00:14:45.449 "data_offset": 0, 00:14:45.449 "data_size": 65536 00:14:45.449 } 00:14:45.449 ] 00:14:45.449 }' 00:14:45.449 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.449 15:22:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.089 15:22:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.089 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:46.089 15:22:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.089 15:22:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.089 15:22:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.089 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:46.089 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:46.089 15:22:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.089 15:22:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.089 [2024-11-20 15:22:32.328387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:46.089 15:22:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.089 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:46.089 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.089 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.089 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.089 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.089 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.089 15:22:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.090 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.090 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.090 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.090 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.090 15:22:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.090 15:22:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.090 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.090 15:22:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.090 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.090 "name": "Existed_Raid", 00:14:46.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.090 "strip_size_kb": 64, 00:14:46.090 "state": "configuring", 00:14:46.090 "raid_level": "raid5f", 00:14:46.090 "superblock": false, 00:14:46.090 "num_base_bdevs": 3, 00:14:46.090 "num_base_bdevs_discovered": 1, 00:14:46.090 "num_base_bdevs_operational": 3, 00:14:46.090 "base_bdevs_list": [ 00:14:46.090 { 00:14:46.090 "name": "BaseBdev1", 00:14:46.090 "uuid": "c5b8e1b9-1c85-45f0-8ecc-2bc7807839b7", 00:14:46.090 "is_configured": true, 00:14:46.090 "data_offset": 0, 00:14:46.090 "data_size": 65536 00:14:46.090 }, 00:14:46.090 { 00:14:46.090 "name": null, 00:14:46.090 "uuid": "3c467d69-a665-41d2-b11f-5a6e49c9499a", 00:14:46.090 "is_configured": false, 00:14:46.090 "data_offset": 0, 00:14:46.090 "data_size": 65536 00:14:46.090 }, 00:14:46.090 { 00:14:46.090 "name": null, 
00:14:46.090 "uuid": "9537c6e2-419e-48f0-be69-b4fa78e1733a", 00:14:46.090 "is_configured": false, 00:14:46.090 "data_offset": 0, 00:14:46.090 "data_size": 65536 00:14:46.090 } 00:14:46.090 ] 00:14:46.090 }' 00:14:46.090 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.090 15:22:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.349 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.349 15:22:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.349 15:22:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.349 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:46.349 15:22:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.349 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:46.349 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:46.349 15:22:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.349 15:22:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.349 [2024-11-20 15:22:32.815771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:46.349 15:22:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.349 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:46.349 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.349 15:22:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.349 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.349 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.349 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.349 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.349 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.349 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.349 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.349 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.349 15:22:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.349 15:22:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.608 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.608 15:22:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.608 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.608 "name": "Existed_Raid", 00:14:46.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.608 "strip_size_kb": 64, 00:14:46.608 "state": "configuring", 00:14:46.608 "raid_level": "raid5f", 00:14:46.608 "superblock": false, 00:14:46.608 "num_base_bdevs": 3, 00:14:46.608 "num_base_bdevs_discovered": 2, 00:14:46.608 "num_base_bdevs_operational": 3, 00:14:46.608 "base_bdevs_list": [ 00:14:46.608 { 
00:14:46.608 "name": "BaseBdev1", 00:14:46.608 "uuid": "c5b8e1b9-1c85-45f0-8ecc-2bc7807839b7", 00:14:46.608 "is_configured": true, 00:14:46.608 "data_offset": 0, 00:14:46.608 "data_size": 65536 00:14:46.608 }, 00:14:46.608 { 00:14:46.608 "name": null, 00:14:46.608 "uuid": "3c467d69-a665-41d2-b11f-5a6e49c9499a", 00:14:46.608 "is_configured": false, 00:14:46.608 "data_offset": 0, 00:14:46.608 "data_size": 65536 00:14:46.608 }, 00:14:46.608 { 00:14:46.608 "name": "BaseBdev3", 00:14:46.608 "uuid": "9537c6e2-419e-48f0-be69-b4fa78e1733a", 00:14:46.608 "is_configured": true, 00:14:46.608 "data_offset": 0, 00:14:46.608 "data_size": 65536 00:14:46.608 } 00:14:46.608 ] 00:14:46.608 }' 00:14:46.608 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.608 15:22:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.866 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.866 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:46.866 15:22:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.867 15:22:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.867 15:22:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.867 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:46.867 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:46.867 15:22:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.867 15:22:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.867 [2024-11-20 15:22:33.295074] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:47.125 15:22:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.125 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:47.125 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.125 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.125 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.125 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.125 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.125 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.125 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.125 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.125 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.125 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.125 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.125 15:22:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.125 15:22:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.125 15:22:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.125 15:22:33 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.125 "name": "Existed_Raid", 00:14:47.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.125 "strip_size_kb": 64, 00:14:47.125 "state": "configuring", 00:14:47.125 "raid_level": "raid5f", 00:14:47.125 "superblock": false, 00:14:47.125 "num_base_bdevs": 3, 00:14:47.125 "num_base_bdevs_discovered": 1, 00:14:47.125 "num_base_bdevs_operational": 3, 00:14:47.125 "base_bdevs_list": [ 00:14:47.125 { 00:14:47.125 "name": null, 00:14:47.125 "uuid": "c5b8e1b9-1c85-45f0-8ecc-2bc7807839b7", 00:14:47.125 "is_configured": false, 00:14:47.125 "data_offset": 0, 00:14:47.125 "data_size": 65536 00:14:47.125 }, 00:14:47.125 { 00:14:47.125 "name": null, 00:14:47.125 "uuid": "3c467d69-a665-41d2-b11f-5a6e49c9499a", 00:14:47.125 "is_configured": false, 00:14:47.125 "data_offset": 0, 00:14:47.125 "data_size": 65536 00:14:47.125 }, 00:14:47.125 { 00:14:47.125 "name": "BaseBdev3", 00:14:47.125 "uuid": "9537c6e2-419e-48f0-be69-b4fa78e1733a", 00:14:47.125 "is_configured": true, 00:14:47.125 "data_offset": 0, 00:14:47.125 "data_size": 65536 00:14:47.125 } 00:14:47.125 ] 00:14:47.125 }' 00:14:47.125 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.125 15:22:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.384 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.384 15:22:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.384 15:22:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.384 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:47.384 15:22:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.645 15:22:33 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:47.645 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:47.645 15:22:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.645 15:22:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.645 [2024-11-20 15:22:33.885631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:47.645 15:22:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.645 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:47.645 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.645 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.645 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.645 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.645 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.645 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.645 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.645 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.645 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.645 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.645 15:22:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.645 15:22:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.645 15:22:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.645 15:22:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.645 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.645 "name": "Existed_Raid", 00:14:47.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.645 "strip_size_kb": 64, 00:14:47.645 "state": "configuring", 00:14:47.645 "raid_level": "raid5f", 00:14:47.645 "superblock": false, 00:14:47.645 "num_base_bdevs": 3, 00:14:47.645 "num_base_bdevs_discovered": 2, 00:14:47.645 "num_base_bdevs_operational": 3, 00:14:47.645 "base_bdevs_list": [ 00:14:47.645 { 00:14:47.645 "name": null, 00:14:47.645 "uuid": "c5b8e1b9-1c85-45f0-8ecc-2bc7807839b7", 00:14:47.645 "is_configured": false, 00:14:47.645 "data_offset": 0, 00:14:47.645 "data_size": 65536 00:14:47.645 }, 00:14:47.645 { 00:14:47.645 "name": "BaseBdev2", 00:14:47.645 "uuid": "3c467d69-a665-41d2-b11f-5a6e49c9499a", 00:14:47.645 "is_configured": true, 00:14:47.645 "data_offset": 0, 00:14:47.645 "data_size": 65536 00:14:47.645 }, 00:14:47.645 { 00:14:47.645 "name": "BaseBdev3", 00:14:47.645 "uuid": "9537c6e2-419e-48f0-be69-b4fa78e1733a", 00:14:47.645 "is_configured": true, 00:14:47.645 "data_offset": 0, 00:14:47.645 "data_size": 65536 00:14:47.645 } 00:14:47.645 ] 00:14:47.645 }' 00:14:47.645 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.645 15:22:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.904 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:47.904 15:22:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.904 15:22:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.904 15:22:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.904 15:22:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.904 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:47.904 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.904 15:22:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.905 15:22:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.905 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:48.164 15:22:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.164 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c5b8e1b9-1c85-45f0-8ecc-2bc7807839b7 00:14:48.164 15:22:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.164 15:22:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.164 [2024-11-20 15:22:34.462587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:48.164 [2024-11-20 15:22:34.462679] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:48.164 [2024-11-20 15:22:34.462701] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:48.164 [2024-11-20 15:22:34.462982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:14:48.164 [2024-11-20 15:22:34.468279] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:48.164 [2024-11-20 15:22:34.468311] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:48.164 [2024-11-20 15:22:34.468596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.164 NewBaseBdev 00:14:48.164 15:22:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.164 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:48.164 15:22:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:48.164 15:22:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:48.164 15:22:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:48.164 15:22:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:48.164 15:22:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:48.164 15:22:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:48.164 15:22:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.164 15:22:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.164 15:22:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.164 15:22:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:48.164 15:22:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.164 15:22:34 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.164 [ 00:14:48.164 { 00:14:48.164 "name": "NewBaseBdev", 00:14:48.164 "aliases": [ 00:14:48.164 "c5b8e1b9-1c85-45f0-8ecc-2bc7807839b7" 00:14:48.164 ], 00:14:48.164 "product_name": "Malloc disk", 00:14:48.164 "block_size": 512, 00:14:48.164 "num_blocks": 65536, 00:14:48.164 "uuid": "c5b8e1b9-1c85-45f0-8ecc-2bc7807839b7", 00:14:48.164 "assigned_rate_limits": { 00:14:48.164 "rw_ios_per_sec": 0, 00:14:48.164 "rw_mbytes_per_sec": 0, 00:14:48.164 "r_mbytes_per_sec": 0, 00:14:48.164 "w_mbytes_per_sec": 0 00:14:48.164 }, 00:14:48.164 "claimed": true, 00:14:48.164 "claim_type": "exclusive_write", 00:14:48.164 "zoned": false, 00:14:48.164 "supported_io_types": { 00:14:48.164 "read": true, 00:14:48.164 "write": true, 00:14:48.164 "unmap": true, 00:14:48.164 "flush": true, 00:14:48.164 "reset": true, 00:14:48.164 "nvme_admin": false, 00:14:48.164 "nvme_io": false, 00:14:48.164 "nvme_io_md": false, 00:14:48.164 "write_zeroes": true, 00:14:48.164 "zcopy": true, 00:14:48.164 "get_zone_info": false, 00:14:48.164 "zone_management": false, 00:14:48.164 "zone_append": false, 00:14:48.164 "compare": false, 00:14:48.164 "compare_and_write": false, 00:14:48.164 "abort": true, 00:14:48.164 "seek_hole": false, 00:14:48.164 "seek_data": false, 00:14:48.164 "copy": true, 00:14:48.164 "nvme_iov_md": false 00:14:48.164 }, 00:14:48.164 "memory_domains": [ 00:14:48.164 { 00:14:48.164 "dma_device_id": "system", 00:14:48.164 "dma_device_type": 1 00:14:48.164 }, 00:14:48.164 { 00:14:48.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.164 "dma_device_type": 2 00:14:48.164 } 00:14:48.164 ], 00:14:48.164 "driver_specific": {} 00:14:48.164 } 00:14:48.164 ] 00:14:48.164 15:22:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.164 15:22:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:48.164 15:22:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:48.164 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.164 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.164 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.164 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.164 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:48.164 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.164 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.165 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.165 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.165 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.165 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.165 15:22:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.165 15:22:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.165 15:22:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.165 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.165 "name": "Existed_Raid", 00:14:48.165 "uuid": "4524c7e6-3bd9-4baf-be59-9c86a003aacf", 00:14:48.165 "strip_size_kb": 64, 00:14:48.165 "state": "online", 
00:14:48.165 "raid_level": "raid5f", 00:14:48.165 "superblock": false, 00:14:48.165 "num_base_bdevs": 3, 00:14:48.165 "num_base_bdevs_discovered": 3, 00:14:48.165 "num_base_bdevs_operational": 3, 00:14:48.165 "base_bdevs_list": [ 00:14:48.165 { 00:14:48.165 "name": "NewBaseBdev", 00:14:48.165 "uuid": "c5b8e1b9-1c85-45f0-8ecc-2bc7807839b7", 00:14:48.165 "is_configured": true, 00:14:48.165 "data_offset": 0, 00:14:48.165 "data_size": 65536 00:14:48.165 }, 00:14:48.165 { 00:14:48.165 "name": "BaseBdev2", 00:14:48.165 "uuid": "3c467d69-a665-41d2-b11f-5a6e49c9499a", 00:14:48.165 "is_configured": true, 00:14:48.165 "data_offset": 0, 00:14:48.165 "data_size": 65536 00:14:48.165 }, 00:14:48.165 { 00:14:48.165 "name": "BaseBdev3", 00:14:48.165 "uuid": "9537c6e2-419e-48f0-be69-b4fa78e1733a", 00:14:48.165 "is_configured": true, 00:14:48.165 "data_offset": 0, 00:14:48.165 "data_size": 65536 00:14:48.165 } 00:14:48.165 ] 00:14:48.165 }' 00:14:48.165 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.165 15:22:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.733 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:48.733 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:48.733 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:48.733 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:48.733 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:48.733 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:48.733 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:48.734 15:22:34 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.734 15:22:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.734 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:48.734 [2024-11-20 15:22:34.958822] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:48.734 15:22:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.734 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:48.734 "name": "Existed_Raid", 00:14:48.734 "aliases": [ 00:14:48.734 "4524c7e6-3bd9-4baf-be59-9c86a003aacf" 00:14:48.734 ], 00:14:48.734 "product_name": "Raid Volume", 00:14:48.734 "block_size": 512, 00:14:48.734 "num_blocks": 131072, 00:14:48.734 "uuid": "4524c7e6-3bd9-4baf-be59-9c86a003aacf", 00:14:48.734 "assigned_rate_limits": { 00:14:48.734 "rw_ios_per_sec": 0, 00:14:48.734 "rw_mbytes_per_sec": 0, 00:14:48.734 "r_mbytes_per_sec": 0, 00:14:48.734 "w_mbytes_per_sec": 0 00:14:48.734 }, 00:14:48.734 "claimed": false, 00:14:48.734 "zoned": false, 00:14:48.734 "supported_io_types": { 00:14:48.734 "read": true, 00:14:48.734 "write": true, 00:14:48.734 "unmap": false, 00:14:48.734 "flush": false, 00:14:48.734 "reset": true, 00:14:48.734 "nvme_admin": false, 00:14:48.734 "nvme_io": false, 00:14:48.734 "nvme_io_md": false, 00:14:48.734 "write_zeroes": true, 00:14:48.734 "zcopy": false, 00:14:48.734 "get_zone_info": false, 00:14:48.734 "zone_management": false, 00:14:48.734 "zone_append": false, 00:14:48.734 "compare": false, 00:14:48.734 "compare_and_write": false, 00:14:48.734 "abort": false, 00:14:48.734 "seek_hole": false, 00:14:48.734 "seek_data": false, 00:14:48.734 "copy": false, 00:14:48.734 "nvme_iov_md": false 00:14:48.734 }, 00:14:48.734 "driver_specific": { 00:14:48.734 "raid": { 00:14:48.734 "uuid": 
"4524c7e6-3bd9-4baf-be59-9c86a003aacf", 00:14:48.734 "strip_size_kb": 64, 00:14:48.734 "state": "online", 00:14:48.734 "raid_level": "raid5f", 00:14:48.734 "superblock": false, 00:14:48.734 "num_base_bdevs": 3, 00:14:48.734 "num_base_bdevs_discovered": 3, 00:14:48.734 "num_base_bdevs_operational": 3, 00:14:48.734 "base_bdevs_list": [ 00:14:48.734 { 00:14:48.734 "name": "NewBaseBdev", 00:14:48.734 "uuid": "c5b8e1b9-1c85-45f0-8ecc-2bc7807839b7", 00:14:48.734 "is_configured": true, 00:14:48.734 "data_offset": 0, 00:14:48.734 "data_size": 65536 00:14:48.734 }, 00:14:48.734 { 00:14:48.734 "name": "BaseBdev2", 00:14:48.734 "uuid": "3c467d69-a665-41d2-b11f-5a6e49c9499a", 00:14:48.734 "is_configured": true, 00:14:48.734 "data_offset": 0, 00:14:48.734 "data_size": 65536 00:14:48.734 }, 00:14:48.734 { 00:14:48.734 "name": "BaseBdev3", 00:14:48.734 "uuid": "9537c6e2-419e-48f0-be69-b4fa78e1733a", 00:14:48.734 "is_configured": true, 00:14:48.734 "data_offset": 0, 00:14:48.734 "data_size": 65536 00:14:48.734 } 00:14:48.734 ] 00:14:48.734 } 00:14:48.734 } 00:14:48.734 }' 00:14:48.734 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:48.734 BaseBdev2 00:14:48.734 BaseBdev3' 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.734 [2024-11-20 15:22:35.194264] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:48.734 [2024-11-20 15:22:35.194301] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:48.734 [2024-11-20 15:22:35.194388] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:48.734 [2024-11-20 15:22:35.194695] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:48.734 [2024-11-20 15:22:35.194717] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79708 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 79708 ']' 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 79708 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:48.734 15:22:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79708 00:14:48.994 killing process with pid 79708 00:14:48.994 15:22:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:48.994 15:22:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:48.994 15:22:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79708' 00:14:48.994 15:22:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 79708 00:14:48.994 [2024-11-20 15:22:35.239954] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:48.994 15:22:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 79708 00:14:49.254 [2024-11-20 15:22:35.548116] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:50.635 15:22:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:50.635 ************************************ 00:14:50.635 END TEST raid5f_state_function_test 00:14:50.635 ************************************ 00:14:50.635 00:14:50.635 real 0m10.737s 00:14:50.635 user 0m16.992s 00:14:50.635 sys 0m2.212s 00:14:50.635 15:22:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:50.635 15:22:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.635 15:22:36 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:14:50.635 15:22:36 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:50.635 15:22:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:50.635 15:22:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:50.635 ************************************ 00:14:50.635 START TEST raid5f_state_function_test_sb 00:14:50.635 ************************************ 00:14:50.635 15:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:14:50.635 15:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:50.635 15:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:50.635 15:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:50.635 15:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:50.635 15:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:50.635 15:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:50.636 15:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:50.636 15:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:50.636 15:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:50.636 15:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:50.636 15:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:50.636 15:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:50.636 15:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:50.636 15:22:36 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:50.636 15:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:50.636 15:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:50.636 15:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:50.636 15:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:50.636 15:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:50.636 15:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:50.636 15:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:50.636 15:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:50.636 15:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:50.636 15:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:50.636 15:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:50.636 15:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:50.636 Process raid pid: 80331 00:14:50.636 15:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80331 00:14:50.636 15:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80331' 00:14:50.636 15:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80331 00:14:50.636 15:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80331 ']' 
00:14:50.636 15:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.636 15:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:50.636 15:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:50.636 15:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.636 15:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:50.636 15:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.636 [2024-11-20 15:22:36.875641] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:14:50.636 [2024-11-20 15:22:36.876521] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.636 [2024-11-20 15:22:37.059785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.895 [2024-11-20 15:22:37.182673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.274 [2024-11-20 15:22:37.395525] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:51.275 [2024-11-20 15:22:37.395571] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:51.567 15:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:51.567 15:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:51.567 15:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:51.567 15:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.567 15:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.567 [2024-11-20 15:22:37.720109] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:51.567 [2024-11-20 15:22:37.720168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:51.567 [2024-11-20 15:22:37.720181] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:51.567 [2024-11-20 15:22:37.720193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:51.567 [2024-11-20 15:22:37.720208] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:14:51.567 [2024-11-20 15:22:37.720220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:51.567 15:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.567 15:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:51.567 15:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.567 15:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.567 15:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.567 15:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.567 15:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.567 15:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.567 15:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.567 15:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.567 15:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.567 15:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.567 15:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.567 15:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.567 15:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.567 15:22:37 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.567 15:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.567 "name": "Existed_Raid", 00:14:51.567 "uuid": "b9aa7d0e-fc8f-46fb-b9fd-86cfd49b02e1", 00:14:51.567 "strip_size_kb": 64, 00:14:51.567 "state": "configuring", 00:14:51.567 "raid_level": "raid5f", 00:14:51.567 "superblock": true, 00:14:51.567 "num_base_bdevs": 3, 00:14:51.567 "num_base_bdevs_discovered": 0, 00:14:51.567 "num_base_bdevs_operational": 3, 00:14:51.567 "base_bdevs_list": [ 00:14:51.567 { 00:14:51.567 "name": "BaseBdev1", 00:14:51.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.567 "is_configured": false, 00:14:51.567 "data_offset": 0, 00:14:51.567 "data_size": 0 00:14:51.567 }, 00:14:51.567 { 00:14:51.567 "name": "BaseBdev2", 00:14:51.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.567 "is_configured": false, 00:14:51.567 "data_offset": 0, 00:14:51.567 "data_size": 0 00:14:51.567 }, 00:14:51.567 { 00:14:51.567 "name": "BaseBdev3", 00:14:51.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.567 "is_configured": false, 00:14:51.567 "data_offset": 0, 00:14:51.567 "data_size": 0 00:14:51.567 } 00:14:51.567 ] 00:14:51.567 }' 00:14:51.567 15:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.567 15:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.827 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:51.827 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.827 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.827 [2024-11-20 15:22:38.079621] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:51.827 
[2024-11-20 15:22:38.079663] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:51.827 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.827 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:51.827 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.827 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.827 [2024-11-20 15:22:38.087616] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:51.827 [2024-11-20 15:22:38.087683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:51.827 [2024-11-20 15:22:38.087694] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:51.827 [2024-11-20 15:22:38.087723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:51.827 [2024-11-20 15:22:38.087731] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:51.827 [2024-11-20 15:22:38.087744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:51.827 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.827 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:51.827 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.827 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.827 [2024-11-20 15:22:38.133809] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:51.827 BaseBdev1 00:14:51.827 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.827 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:51.827 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:51.827 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:51.827 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:51.827 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:51.827 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:51.827 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:51.827 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.827 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.827 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.827 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:51.828 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.828 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.828 [ 00:14:51.828 { 00:14:51.828 "name": "BaseBdev1", 00:14:51.828 "aliases": [ 00:14:51.828 "78a66c85-6139-45d4-99fd-ea8a98cc054a" 00:14:51.828 ], 00:14:51.828 "product_name": "Malloc disk", 00:14:51.828 "block_size": 512, 00:14:51.828 
"num_blocks": 65536, 00:14:51.828 "uuid": "78a66c85-6139-45d4-99fd-ea8a98cc054a", 00:14:51.828 "assigned_rate_limits": { 00:14:51.828 "rw_ios_per_sec": 0, 00:14:51.828 "rw_mbytes_per_sec": 0, 00:14:51.828 "r_mbytes_per_sec": 0, 00:14:51.828 "w_mbytes_per_sec": 0 00:14:51.828 }, 00:14:51.828 "claimed": true, 00:14:51.828 "claim_type": "exclusive_write", 00:14:51.828 "zoned": false, 00:14:51.828 "supported_io_types": { 00:14:51.828 "read": true, 00:14:51.828 "write": true, 00:14:51.828 "unmap": true, 00:14:51.828 "flush": true, 00:14:51.828 "reset": true, 00:14:51.828 "nvme_admin": false, 00:14:51.828 "nvme_io": false, 00:14:51.828 "nvme_io_md": false, 00:14:51.828 "write_zeroes": true, 00:14:51.828 "zcopy": true, 00:14:51.828 "get_zone_info": false, 00:14:51.828 "zone_management": false, 00:14:51.828 "zone_append": false, 00:14:51.828 "compare": false, 00:14:51.828 "compare_and_write": false, 00:14:51.828 "abort": true, 00:14:51.828 "seek_hole": false, 00:14:51.828 "seek_data": false, 00:14:51.828 "copy": true, 00:14:51.828 "nvme_iov_md": false 00:14:51.828 }, 00:14:51.828 "memory_domains": [ 00:14:51.828 { 00:14:51.828 "dma_device_id": "system", 00:14:51.828 "dma_device_type": 1 00:14:51.828 }, 00:14:51.828 { 00:14:51.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.828 "dma_device_type": 2 00:14:51.828 } 00:14:51.828 ], 00:14:51.828 "driver_specific": {} 00:14:51.828 } 00:14:51.828 ] 00:14:51.828 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.828 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:51.828 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:51.828 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.828 15:22:38 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.828 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.828 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.828 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.828 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.828 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.828 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.828 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.828 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.828 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.828 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.828 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.828 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.828 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.828 "name": "Existed_Raid", 00:14:51.828 "uuid": "32f1b826-fade-4640-840a-9ccc9bac613d", 00:14:51.828 "strip_size_kb": 64, 00:14:51.828 "state": "configuring", 00:14:51.828 "raid_level": "raid5f", 00:14:51.828 "superblock": true, 00:14:51.828 "num_base_bdevs": 3, 00:14:51.828 "num_base_bdevs_discovered": 1, 00:14:51.828 "num_base_bdevs_operational": 3, 00:14:51.828 "base_bdevs_list": [ 00:14:51.828 { 00:14:51.828 
"name": "BaseBdev1", 00:14:51.828 "uuid": "78a66c85-6139-45d4-99fd-ea8a98cc054a", 00:14:51.828 "is_configured": true, 00:14:51.828 "data_offset": 2048, 00:14:51.828 "data_size": 63488 00:14:51.828 }, 00:14:51.828 { 00:14:51.828 "name": "BaseBdev2", 00:14:51.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.828 "is_configured": false, 00:14:51.828 "data_offset": 0, 00:14:51.828 "data_size": 0 00:14:51.828 }, 00:14:51.828 { 00:14:51.828 "name": "BaseBdev3", 00:14:51.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.828 "is_configured": false, 00:14:51.828 "data_offset": 0, 00:14:51.828 "data_size": 0 00:14:51.828 } 00:14:51.828 ] 00:14:51.828 }' 00:14:51.828 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.828 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.397 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:52.397 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.397 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.397 [2024-11-20 15:22:38.597372] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:52.397 [2024-11-20 15:22:38.597432] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:52.397 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.397 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:52.397 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.397 15:22:38 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:14:52.397 [2024-11-20 15:22:38.609450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:52.398 [2024-11-20 15:22:38.611665] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:52.398 [2024-11-20 15:22:38.611736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:52.398 [2024-11-20 15:22:38.611749] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:52.398 [2024-11-20 15:22:38.611762] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:52.398 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.398 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:52.398 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:52.398 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:52.398 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.398 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.398 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.398 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.398 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.398 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.398 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:52.398 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.398 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.398 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.398 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.398 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.398 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.398 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.398 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.398 "name": "Existed_Raid", 00:14:52.398 "uuid": "fe5ab156-2175-4f14-8900-3b8692a717a9", 00:14:52.398 "strip_size_kb": 64, 00:14:52.398 "state": "configuring", 00:14:52.398 "raid_level": "raid5f", 00:14:52.398 "superblock": true, 00:14:52.398 "num_base_bdevs": 3, 00:14:52.398 "num_base_bdevs_discovered": 1, 00:14:52.398 "num_base_bdevs_operational": 3, 00:14:52.398 "base_bdevs_list": [ 00:14:52.398 { 00:14:52.398 "name": "BaseBdev1", 00:14:52.398 "uuid": "78a66c85-6139-45d4-99fd-ea8a98cc054a", 00:14:52.398 "is_configured": true, 00:14:52.398 "data_offset": 2048, 00:14:52.398 "data_size": 63488 00:14:52.398 }, 00:14:52.398 { 00:14:52.398 "name": "BaseBdev2", 00:14:52.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.398 "is_configured": false, 00:14:52.398 "data_offset": 0, 00:14:52.398 "data_size": 0 00:14:52.398 }, 00:14:52.398 { 00:14:52.398 "name": "BaseBdev3", 00:14:52.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.398 "is_configured": false, 00:14:52.398 "data_offset": 0, 00:14:52.398 "data_size": 
0 00:14:52.398 } 00:14:52.398 ] 00:14:52.398 }' 00:14:52.398 15:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.398 15:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.657 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:52.657 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.657 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.658 [2024-11-20 15:22:39.049641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:52.658 BaseBdev2 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.658 [ 00:14:52.658 { 00:14:52.658 "name": "BaseBdev2", 00:14:52.658 "aliases": [ 00:14:52.658 "1b881399-e359-4ece-9d67-29e7b647fd46" 00:14:52.658 ], 00:14:52.658 "product_name": "Malloc disk", 00:14:52.658 "block_size": 512, 00:14:52.658 "num_blocks": 65536, 00:14:52.658 "uuid": "1b881399-e359-4ece-9d67-29e7b647fd46", 00:14:52.658 "assigned_rate_limits": { 00:14:52.658 "rw_ios_per_sec": 0, 00:14:52.658 "rw_mbytes_per_sec": 0, 00:14:52.658 "r_mbytes_per_sec": 0, 00:14:52.658 "w_mbytes_per_sec": 0 00:14:52.658 }, 00:14:52.658 "claimed": true, 00:14:52.658 "claim_type": "exclusive_write", 00:14:52.658 "zoned": false, 00:14:52.658 "supported_io_types": { 00:14:52.658 "read": true, 00:14:52.658 "write": true, 00:14:52.658 "unmap": true, 00:14:52.658 "flush": true, 00:14:52.658 "reset": true, 00:14:52.658 "nvme_admin": false, 00:14:52.658 "nvme_io": false, 00:14:52.658 "nvme_io_md": false, 00:14:52.658 "write_zeroes": true, 00:14:52.658 "zcopy": true, 00:14:52.658 "get_zone_info": false, 00:14:52.658 "zone_management": false, 00:14:52.658 "zone_append": false, 00:14:52.658 "compare": false, 00:14:52.658 "compare_and_write": false, 00:14:52.658 "abort": true, 00:14:52.658 "seek_hole": false, 00:14:52.658 "seek_data": false, 00:14:52.658 "copy": true, 00:14:52.658 "nvme_iov_md": false 00:14:52.658 }, 00:14:52.658 "memory_domains": [ 00:14:52.658 { 00:14:52.658 "dma_device_id": "system", 00:14:52.658 "dma_device_type": 1 00:14:52.658 }, 00:14:52.658 { 00:14:52.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.658 "dma_device_type": 2 00:14:52.658 } 
00:14:52.658 ], 00:14:52.658 "driver_specific": {} 00:14:52.658 } 00:14:52.658 ] 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.658 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.658 "name": "Existed_Raid", 00:14:52.658 "uuid": "fe5ab156-2175-4f14-8900-3b8692a717a9", 00:14:52.658 "strip_size_kb": 64, 00:14:52.658 "state": "configuring", 00:14:52.658 "raid_level": "raid5f", 00:14:52.658 "superblock": true, 00:14:52.658 "num_base_bdevs": 3, 00:14:52.658 "num_base_bdevs_discovered": 2, 00:14:52.658 "num_base_bdevs_operational": 3, 00:14:52.658 "base_bdevs_list": [ 00:14:52.658 { 00:14:52.658 "name": "BaseBdev1", 00:14:52.658 "uuid": "78a66c85-6139-45d4-99fd-ea8a98cc054a", 00:14:52.658 "is_configured": true, 00:14:52.658 "data_offset": 2048, 00:14:52.658 "data_size": 63488 00:14:52.658 }, 00:14:52.658 { 00:14:52.658 "name": "BaseBdev2", 00:14:52.658 "uuid": "1b881399-e359-4ece-9d67-29e7b647fd46", 00:14:52.658 "is_configured": true, 00:14:52.658 "data_offset": 2048, 00:14:52.658 "data_size": 63488 00:14:52.658 }, 00:14:52.658 { 00:14:52.658 "name": "BaseBdev3", 00:14:52.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.658 "is_configured": false, 00:14:52.658 "data_offset": 0, 00:14:52.658 "data_size": 0 00:14:52.658 } 00:14:52.658 ] 00:14:52.658 }' 00:14:52.916 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.916 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.175 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:53.175 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:14:53.175 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.175 [2024-11-20 15:22:39.583199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:53.175 [2024-11-20 15:22:39.583687] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:53.175 [2024-11-20 15:22:39.583717] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:53.175 [2024-11-20 15:22:39.584007] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:53.175 BaseBdev3 00:14:53.175 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.175 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:53.175 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:53.175 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:53.175 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:53.175 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:53.175 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:53.175 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:53.175 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.175 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.175 [2024-11-20 15:22:39.590066] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:53.175 [2024-11-20 15:22:39.590215] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:53.175 [2024-11-20 15:22:39.590511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.175 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.175 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:53.175 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.175 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.175 [ 00:14:53.175 { 00:14:53.175 "name": "BaseBdev3", 00:14:53.175 "aliases": [ 00:14:53.175 "305e9178-ca6b-4e3e-94e4-394c1fb5d4fc" 00:14:53.175 ], 00:14:53.175 "product_name": "Malloc disk", 00:14:53.175 "block_size": 512, 00:14:53.175 "num_blocks": 65536, 00:14:53.175 "uuid": "305e9178-ca6b-4e3e-94e4-394c1fb5d4fc", 00:14:53.175 "assigned_rate_limits": { 00:14:53.175 "rw_ios_per_sec": 0, 00:14:53.175 "rw_mbytes_per_sec": 0, 00:14:53.175 "r_mbytes_per_sec": 0, 00:14:53.175 "w_mbytes_per_sec": 0 00:14:53.175 }, 00:14:53.175 "claimed": true, 00:14:53.175 "claim_type": "exclusive_write", 00:14:53.175 "zoned": false, 00:14:53.175 "supported_io_types": { 00:14:53.175 "read": true, 00:14:53.175 "write": true, 00:14:53.175 "unmap": true, 00:14:53.175 "flush": true, 00:14:53.175 "reset": true, 00:14:53.175 "nvme_admin": false, 00:14:53.175 "nvme_io": false, 00:14:53.175 "nvme_io_md": false, 00:14:53.175 "write_zeroes": true, 00:14:53.175 "zcopy": true, 00:14:53.175 "get_zone_info": false, 00:14:53.175 "zone_management": false, 00:14:53.175 "zone_append": false, 00:14:53.175 "compare": false, 00:14:53.175 "compare_and_write": false, 00:14:53.175 "abort": true, 00:14:53.175 "seek_hole": false, 00:14:53.175 "seek_data": false, 00:14:53.175 "copy": true, 00:14:53.175 
"nvme_iov_md": false 00:14:53.175 }, 00:14:53.175 "memory_domains": [ 00:14:53.175 { 00:14:53.175 "dma_device_id": "system", 00:14:53.175 "dma_device_type": 1 00:14:53.175 }, 00:14:53.175 { 00:14:53.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.175 "dma_device_type": 2 00:14:53.175 } 00:14:53.175 ], 00:14:53.175 "driver_specific": {} 00:14:53.175 } 00:14:53.175 ] 00:14:53.175 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.175 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:53.175 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:53.175 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:53.175 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:53.175 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.175 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.175 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.176 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.176 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:53.176 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.176 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.176 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.176 15:22:39 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.176 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.176 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.176 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.176 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.435 15:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.435 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.435 "name": "Existed_Raid", 00:14:53.435 "uuid": "fe5ab156-2175-4f14-8900-3b8692a717a9", 00:14:53.435 "strip_size_kb": 64, 00:14:53.435 "state": "online", 00:14:53.435 "raid_level": "raid5f", 00:14:53.435 "superblock": true, 00:14:53.435 "num_base_bdevs": 3, 00:14:53.435 "num_base_bdevs_discovered": 3, 00:14:53.435 "num_base_bdevs_operational": 3, 00:14:53.435 "base_bdevs_list": [ 00:14:53.435 { 00:14:53.435 "name": "BaseBdev1", 00:14:53.435 "uuid": "78a66c85-6139-45d4-99fd-ea8a98cc054a", 00:14:53.435 "is_configured": true, 00:14:53.435 "data_offset": 2048, 00:14:53.435 "data_size": 63488 00:14:53.435 }, 00:14:53.435 { 00:14:53.435 "name": "BaseBdev2", 00:14:53.435 "uuid": "1b881399-e359-4ece-9d67-29e7b647fd46", 00:14:53.435 "is_configured": true, 00:14:53.435 "data_offset": 2048, 00:14:53.435 "data_size": 63488 00:14:53.435 }, 00:14:53.435 { 00:14:53.435 "name": "BaseBdev3", 00:14:53.435 "uuid": "305e9178-ca6b-4e3e-94e4-394c1fb5d4fc", 00:14:53.435 "is_configured": true, 00:14:53.435 "data_offset": 2048, 00:14:53.435 "data_size": 63488 00:14:53.435 } 00:14:53.435 ] 00:14:53.435 }' 00:14:53.435 15:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.435 15:22:39 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.694 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:53.694 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:53.694 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:53.694 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:53.694 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:53.694 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:53.694 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:53.694 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:53.694 15:22:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.694 15:22:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.694 [2024-11-20 15:22:40.036784] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:53.694 15:22:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.694 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:53.694 "name": "Existed_Raid", 00:14:53.694 "aliases": [ 00:14:53.694 "fe5ab156-2175-4f14-8900-3b8692a717a9" 00:14:53.694 ], 00:14:53.694 "product_name": "Raid Volume", 00:14:53.694 "block_size": 512, 00:14:53.694 "num_blocks": 126976, 00:14:53.694 "uuid": "fe5ab156-2175-4f14-8900-3b8692a717a9", 00:14:53.694 "assigned_rate_limits": { 00:14:53.694 "rw_ios_per_sec": 0, 00:14:53.694 
"rw_mbytes_per_sec": 0, 00:14:53.694 "r_mbytes_per_sec": 0, 00:14:53.694 "w_mbytes_per_sec": 0 00:14:53.694 }, 00:14:53.694 "claimed": false, 00:14:53.694 "zoned": false, 00:14:53.694 "supported_io_types": { 00:14:53.694 "read": true, 00:14:53.694 "write": true, 00:14:53.694 "unmap": false, 00:14:53.694 "flush": false, 00:14:53.694 "reset": true, 00:14:53.694 "nvme_admin": false, 00:14:53.694 "nvme_io": false, 00:14:53.694 "nvme_io_md": false, 00:14:53.694 "write_zeroes": true, 00:14:53.694 "zcopy": false, 00:14:53.694 "get_zone_info": false, 00:14:53.694 "zone_management": false, 00:14:53.694 "zone_append": false, 00:14:53.694 "compare": false, 00:14:53.694 "compare_and_write": false, 00:14:53.694 "abort": false, 00:14:53.694 "seek_hole": false, 00:14:53.694 "seek_data": false, 00:14:53.694 "copy": false, 00:14:53.694 "nvme_iov_md": false 00:14:53.694 }, 00:14:53.694 "driver_specific": { 00:14:53.694 "raid": { 00:14:53.694 "uuid": "fe5ab156-2175-4f14-8900-3b8692a717a9", 00:14:53.694 "strip_size_kb": 64, 00:14:53.694 "state": "online", 00:14:53.694 "raid_level": "raid5f", 00:14:53.694 "superblock": true, 00:14:53.694 "num_base_bdevs": 3, 00:14:53.694 "num_base_bdevs_discovered": 3, 00:14:53.694 "num_base_bdevs_operational": 3, 00:14:53.694 "base_bdevs_list": [ 00:14:53.694 { 00:14:53.694 "name": "BaseBdev1", 00:14:53.694 "uuid": "78a66c85-6139-45d4-99fd-ea8a98cc054a", 00:14:53.694 "is_configured": true, 00:14:53.694 "data_offset": 2048, 00:14:53.694 "data_size": 63488 00:14:53.694 }, 00:14:53.694 { 00:14:53.694 "name": "BaseBdev2", 00:14:53.694 "uuid": "1b881399-e359-4ece-9d67-29e7b647fd46", 00:14:53.694 "is_configured": true, 00:14:53.694 "data_offset": 2048, 00:14:53.694 "data_size": 63488 00:14:53.694 }, 00:14:53.694 { 00:14:53.694 "name": "BaseBdev3", 00:14:53.694 "uuid": "305e9178-ca6b-4e3e-94e4-394c1fb5d4fc", 00:14:53.694 "is_configured": true, 00:14:53.694 "data_offset": 2048, 00:14:53.694 "data_size": 63488 00:14:53.694 } 00:14:53.694 ] 00:14:53.694 } 
00:14:53.694 } 00:14:53.694 }' 00:14:53.694 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:53.694 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:53.694 BaseBdev2 00:14:53.694 BaseBdev3' 00:14:53.694 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.694 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:53.694 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.694 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.694 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:53.694 15:22:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.694 15:22:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.954 15:22:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.954 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.954 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.954 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.954 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:53.954 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.954 15:22:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.954 15:22:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.954 15:22:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.954 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.954 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.954 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.954 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:53.954 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.954 15:22:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.954 15:22:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.954 15:22:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.954 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.954 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.954 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:53.954 15:22:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.954 15:22:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.954 [2024-11-20 
15:22:40.340195] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:54.213 15:22:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.213 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:54.214 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:54.214 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:54.214 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:54.214 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:54.214 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:54.214 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.214 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.214 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.214 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.214 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:54.214 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.214 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.214 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.214 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.214 15:22:40 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.214 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.214 15:22:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.214 15:22:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.214 15:22:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.214 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.214 "name": "Existed_Raid", 00:14:54.214 "uuid": "fe5ab156-2175-4f14-8900-3b8692a717a9", 00:14:54.214 "strip_size_kb": 64, 00:14:54.214 "state": "online", 00:14:54.214 "raid_level": "raid5f", 00:14:54.214 "superblock": true, 00:14:54.214 "num_base_bdevs": 3, 00:14:54.214 "num_base_bdevs_discovered": 2, 00:14:54.214 "num_base_bdevs_operational": 2, 00:14:54.214 "base_bdevs_list": [ 00:14:54.214 { 00:14:54.214 "name": null, 00:14:54.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.214 "is_configured": false, 00:14:54.214 "data_offset": 0, 00:14:54.214 "data_size": 63488 00:14:54.214 }, 00:14:54.214 { 00:14:54.214 "name": "BaseBdev2", 00:14:54.214 "uuid": "1b881399-e359-4ece-9d67-29e7b647fd46", 00:14:54.214 "is_configured": true, 00:14:54.214 "data_offset": 2048, 00:14:54.214 "data_size": 63488 00:14:54.214 }, 00:14:54.214 { 00:14:54.214 "name": "BaseBdev3", 00:14:54.214 "uuid": "305e9178-ca6b-4e3e-94e4-394c1fb5d4fc", 00:14:54.214 "is_configured": true, 00:14:54.214 "data_offset": 2048, 00:14:54.214 "data_size": 63488 00:14:54.214 } 00:14:54.214 ] 00:14:54.214 }' 00:14:54.214 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.214 15:22:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:54.473 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:54.473 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:54.473 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.473 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:54.473 15:22:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.473 15:22:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.473 15:22:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.473 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:54.473 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:54.473 15:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:54.473 15:22:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.473 15:22:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.473 [2024-11-20 15:22:40.934004] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:54.473 [2024-11-20 15:22:40.934150] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:54.733 [2024-11-20 15:22:41.031287] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:54.733 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.733 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:54.733 15:22:41 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:54.733 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.733 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:54.733 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.733 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.733 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.733 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:54.733 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:54.733 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:54.733 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.733 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.733 [2024-11-20 15:22:41.087277] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:54.733 [2024-11-20 15:22:41.087337] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:54.733 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.733 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:54.733 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:54.733 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.733 
15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:54.733 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.733 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.733 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.994 BaseBdev2 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:54.994 15:22:41 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.994 [ 00:14:54.994 { 00:14:54.994 "name": "BaseBdev2", 00:14:54.994 "aliases": [ 00:14:54.994 "b307d859-c93c-4577-bd43-9e1610e11789" 00:14:54.994 ], 00:14:54.994 "product_name": "Malloc disk", 00:14:54.994 "block_size": 512, 00:14:54.994 "num_blocks": 65536, 00:14:54.994 "uuid": "b307d859-c93c-4577-bd43-9e1610e11789", 00:14:54.994 "assigned_rate_limits": { 00:14:54.994 "rw_ios_per_sec": 0, 00:14:54.994 "rw_mbytes_per_sec": 0, 00:14:54.994 "r_mbytes_per_sec": 0, 00:14:54.994 "w_mbytes_per_sec": 0 00:14:54.994 }, 00:14:54.994 "claimed": false, 00:14:54.994 "zoned": false, 00:14:54.994 "supported_io_types": { 00:14:54.994 "read": true, 00:14:54.994 "write": true, 00:14:54.994 "unmap": true, 00:14:54.994 "flush": true, 00:14:54.994 "reset": true, 00:14:54.994 "nvme_admin": false, 00:14:54.994 "nvme_io": false, 00:14:54.994 "nvme_io_md": false, 00:14:54.994 "write_zeroes": true, 00:14:54.994 "zcopy": true, 00:14:54.994 "get_zone_info": false, 
00:14:54.994 "zone_management": false, 00:14:54.994 "zone_append": false, 00:14:54.994 "compare": false, 00:14:54.994 "compare_and_write": false, 00:14:54.994 "abort": true, 00:14:54.994 "seek_hole": false, 00:14:54.994 "seek_data": false, 00:14:54.994 "copy": true, 00:14:54.994 "nvme_iov_md": false 00:14:54.994 }, 00:14:54.994 "memory_domains": [ 00:14:54.994 { 00:14:54.994 "dma_device_id": "system", 00:14:54.994 "dma_device_type": 1 00:14:54.994 }, 00:14:54.994 { 00:14:54.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.994 "dma_device_type": 2 00:14:54.994 } 00:14:54.994 ], 00:14:54.994 "driver_specific": {} 00:14:54.994 } 00:14:54.994 ] 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.994 BaseBdev3 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:54.994 15:22:41 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.994 [ 00:14:54.994 { 00:14:54.994 "name": "BaseBdev3", 00:14:54.994 "aliases": [ 00:14:54.994 "6bea71a2-f498-4d5b-8fe8-1bcf1bbf6827" 00:14:54.994 ], 00:14:54.994 "product_name": "Malloc disk", 00:14:54.994 "block_size": 512, 00:14:54.994 "num_blocks": 65536, 00:14:54.994 "uuid": "6bea71a2-f498-4d5b-8fe8-1bcf1bbf6827", 00:14:54.994 "assigned_rate_limits": { 00:14:54.994 "rw_ios_per_sec": 0, 00:14:54.994 "rw_mbytes_per_sec": 0, 00:14:54.994 "r_mbytes_per_sec": 0, 00:14:54.994 "w_mbytes_per_sec": 0 00:14:54.994 }, 00:14:54.994 "claimed": false, 00:14:54.994 "zoned": false, 00:14:54.994 "supported_io_types": { 00:14:54.994 "read": true, 00:14:54.994 "write": true, 00:14:54.994 "unmap": true, 00:14:54.994 "flush": true, 00:14:54.994 "reset": true, 00:14:54.994 "nvme_admin": false, 00:14:54.994 "nvme_io": false, 00:14:54.994 "nvme_io_md": 
false, 00:14:54.994 "write_zeroes": true, 00:14:54.994 "zcopy": true, 00:14:54.994 "get_zone_info": false, 00:14:54.994 "zone_management": false, 00:14:54.994 "zone_append": false, 00:14:54.994 "compare": false, 00:14:54.994 "compare_and_write": false, 00:14:54.994 "abort": true, 00:14:54.994 "seek_hole": false, 00:14:54.994 "seek_data": false, 00:14:54.994 "copy": true, 00:14:54.994 "nvme_iov_md": false 00:14:54.994 }, 00:14:54.994 "memory_domains": [ 00:14:54.994 { 00:14:54.994 "dma_device_id": "system", 00:14:54.994 "dma_device_type": 1 00:14:54.994 }, 00:14:54.994 { 00:14:54.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.994 "dma_device_type": 2 00:14:54.994 } 00:14:54.994 ], 00:14:54.994 "driver_specific": {} 00:14:54.994 } 00:14:54.994 ] 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.994 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.994 [2024-11-20 15:22:41.423855] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:54.994 [2024-11-20 15:22:41.423918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:54.994 [2024-11-20 15:22:41.423947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:14:54.994 [2024-11-20 15:22:41.426126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:54.995 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.995 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:54.995 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.995 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.995 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.995 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.995 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.995 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.995 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.995 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.995 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.995 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.995 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.995 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.995 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.995 15:22:41 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.254 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.254 "name": "Existed_Raid", 00:14:55.254 "uuid": "a9b20e22-8483-42ca-9eae-4506f06da79a", 00:14:55.254 "strip_size_kb": 64, 00:14:55.254 "state": "configuring", 00:14:55.254 "raid_level": "raid5f", 00:14:55.254 "superblock": true, 00:14:55.254 "num_base_bdevs": 3, 00:14:55.254 "num_base_bdevs_discovered": 2, 00:14:55.254 "num_base_bdevs_operational": 3, 00:14:55.254 "base_bdevs_list": [ 00:14:55.254 { 00:14:55.254 "name": "BaseBdev1", 00:14:55.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.254 "is_configured": false, 00:14:55.254 "data_offset": 0, 00:14:55.254 "data_size": 0 00:14:55.254 }, 00:14:55.254 { 00:14:55.254 "name": "BaseBdev2", 00:14:55.254 "uuid": "b307d859-c93c-4577-bd43-9e1610e11789", 00:14:55.254 "is_configured": true, 00:14:55.254 "data_offset": 2048, 00:14:55.254 "data_size": 63488 00:14:55.254 }, 00:14:55.254 { 00:14:55.254 "name": "BaseBdev3", 00:14:55.254 "uuid": "6bea71a2-f498-4d5b-8fe8-1bcf1bbf6827", 00:14:55.254 "is_configured": true, 00:14:55.254 "data_offset": 2048, 00:14:55.254 "data_size": 63488 00:14:55.254 } 00:14:55.254 ] 00:14:55.254 }' 00:14:55.254 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.254 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.514 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:55.514 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.514 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.514 [2024-11-20 15:22:41.867206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:55.514 
15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.514 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:55.514 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.514 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.514 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.514 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.514 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.514 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.514 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.514 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.514 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.514 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.514 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.514 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.514 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.514 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.514 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:55.514 "name": "Existed_Raid", 00:14:55.514 "uuid": "a9b20e22-8483-42ca-9eae-4506f06da79a", 00:14:55.514 "strip_size_kb": 64, 00:14:55.514 "state": "configuring", 00:14:55.514 "raid_level": "raid5f", 00:14:55.514 "superblock": true, 00:14:55.514 "num_base_bdevs": 3, 00:14:55.514 "num_base_bdevs_discovered": 1, 00:14:55.514 "num_base_bdevs_operational": 3, 00:14:55.514 "base_bdevs_list": [ 00:14:55.514 { 00:14:55.514 "name": "BaseBdev1", 00:14:55.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.514 "is_configured": false, 00:14:55.514 "data_offset": 0, 00:14:55.514 "data_size": 0 00:14:55.514 }, 00:14:55.514 { 00:14:55.514 "name": null, 00:14:55.514 "uuid": "b307d859-c93c-4577-bd43-9e1610e11789", 00:14:55.514 "is_configured": false, 00:14:55.514 "data_offset": 0, 00:14:55.514 "data_size": 63488 00:14:55.514 }, 00:14:55.514 { 00:14:55.514 "name": "BaseBdev3", 00:14:55.514 "uuid": "6bea71a2-f498-4d5b-8fe8-1bcf1bbf6827", 00:14:55.514 "is_configured": true, 00:14:55.514 "data_offset": 2048, 00:14:55.514 "data_size": 63488 00:14:55.514 } 00:14:55.514 ] 00:14:55.514 }' 00:14:55.514 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.514 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.082 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.082 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:56.082 15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.082 15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.082 15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.082 15:22:42 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:56.082 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:56.082 15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.082 15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.082 [2024-11-20 15:22:42.388406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:56.082 BaseBdev1 00:14:56.082 15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.082 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:56.082 15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:56.082 15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:56.082 15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:56.082 15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:56.082 15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:56.082 15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:56.082 15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.082 15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.082 15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.082 15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:56.082 
15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.082 15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.082 [ 00:14:56.082 { 00:14:56.082 "name": "BaseBdev1", 00:14:56.082 "aliases": [ 00:14:56.082 "25a1cf5e-3683-4080-a2fc-4e8b26609507" 00:14:56.082 ], 00:14:56.082 "product_name": "Malloc disk", 00:14:56.082 "block_size": 512, 00:14:56.082 "num_blocks": 65536, 00:14:56.082 "uuid": "25a1cf5e-3683-4080-a2fc-4e8b26609507", 00:14:56.082 "assigned_rate_limits": { 00:14:56.082 "rw_ios_per_sec": 0, 00:14:56.082 "rw_mbytes_per_sec": 0, 00:14:56.082 "r_mbytes_per_sec": 0, 00:14:56.082 "w_mbytes_per_sec": 0 00:14:56.082 }, 00:14:56.082 "claimed": true, 00:14:56.082 "claim_type": "exclusive_write", 00:14:56.082 "zoned": false, 00:14:56.082 "supported_io_types": { 00:14:56.082 "read": true, 00:14:56.082 "write": true, 00:14:56.082 "unmap": true, 00:14:56.082 "flush": true, 00:14:56.082 "reset": true, 00:14:56.082 "nvme_admin": false, 00:14:56.082 "nvme_io": false, 00:14:56.082 "nvme_io_md": false, 00:14:56.082 "write_zeroes": true, 00:14:56.082 "zcopy": true, 00:14:56.082 "get_zone_info": false, 00:14:56.082 "zone_management": false, 00:14:56.082 "zone_append": false, 00:14:56.082 "compare": false, 00:14:56.082 "compare_and_write": false, 00:14:56.082 "abort": true, 00:14:56.082 "seek_hole": false, 00:14:56.082 "seek_data": false, 00:14:56.082 "copy": true, 00:14:56.082 "nvme_iov_md": false 00:14:56.082 }, 00:14:56.082 "memory_domains": [ 00:14:56.082 { 00:14:56.082 "dma_device_id": "system", 00:14:56.082 "dma_device_type": 1 00:14:56.082 }, 00:14:56.082 { 00:14:56.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.082 "dma_device_type": 2 00:14:56.082 } 00:14:56.082 ], 00:14:56.082 "driver_specific": {} 00:14:56.082 } 00:14:56.082 ] 00:14:56.083 15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.083 
15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:56.083 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:56.083 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.083 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.083 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.083 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.083 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.083 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.083 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.083 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.083 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.083 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.083 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.083 15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.083 15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.083 15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.083 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:56.083 "name": "Existed_Raid", 00:14:56.083 "uuid": "a9b20e22-8483-42ca-9eae-4506f06da79a", 00:14:56.083 "strip_size_kb": 64, 00:14:56.083 "state": "configuring", 00:14:56.083 "raid_level": "raid5f", 00:14:56.083 "superblock": true, 00:14:56.083 "num_base_bdevs": 3, 00:14:56.083 "num_base_bdevs_discovered": 2, 00:14:56.083 "num_base_bdevs_operational": 3, 00:14:56.083 "base_bdevs_list": [ 00:14:56.083 { 00:14:56.083 "name": "BaseBdev1", 00:14:56.083 "uuid": "25a1cf5e-3683-4080-a2fc-4e8b26609507", 00:14:56.083 "is_configured": true, 00:14:56.083 "data_offset": 2048, 00:14:56.083 "data_size": 63488 00:14:56.083 }, 00:14:56.083 { 00:14:56.083 "name": null, 00:14:56.083 "uuid": "b307d859-c93c-4577-bd43-9e1610e11789", 00:14:56.083 "is_configured": false, 00:14:56.083 "data_offset": 0, 00:14:56.083 "data_size": 63488 00:14:56.083 }, 00:14:56.083 { 00:14:56.083 "name": "BaseBdev3", 00:14:56.083 "uuid": "6bea71a2-f498-4d5b-8fe8-1bcf1bbf6827", 00:14:56.083 "is_configured": true, 00:14:56.083 "data_offset": 2048, 00:14:56.083 "data_size": 63488 00:14:56.083 } 00:14:56.083 ] 00:14:56.083 }' 00:14:56.083 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.083 15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.655 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.655 15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.655 15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.656 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:56.656 15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.656 15:22:42 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:56.656 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:56.656 15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.656 15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.656 [2024-11-20 15:22:42.943764] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:56.656 15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.656 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:56.656 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.656 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.656 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.656 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.656 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.656 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.656 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.656 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.656 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.656 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.656 15:22:42 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.656 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.656 15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.656 15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.656 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.656 "name": "Existed_Raid", 00:14:56.656 "uuid": "a9b20e22-8483-42ca-9eae-4506f06da79a", 00:14:56.656 "strip_size_kb": 64, 00:14:56.656 "state": "configuring", 00:14:56.656 "raid_level": "raid5f", 00:14:56.656 "superblock": true, 00:14:56.656 "num_base_bdevs": 3, 00:14:56.656 "num_base_bdevs_discovered": 1, 00:14:56.656 "num_base_bdevs_operational": 3, 00:14:56.656 "base_bdevs_list": [ 00:14:56.656 { 00:14:56.656 "name": "BaseBdev1", 00:14:56.656 "uuid": "25a1cf5e-3683-4080-a2fc-4e8b26609507", 00:14:56.656 "is_configured": true, 00:14:56.656 "data_offset": 2048, 00:14:56.656 "data_size": 63488 00:14:56.656 }, 00:14:56.656 { 00:14:56.656 "name": null, 00:14:56.656 "uuid": "b307d859-c93c-4577-bd43-9e1610e11789", 00:14:56.656 "is_configured": false, 00:14:56.656 "data_offset": 0, 00:14:56.656 "data_size": 63488 00:14:56.656 }, 00:14:56.656 { 00:14:56.656 "name": null, 00:14:56.656 "uuid": "6bea71a2-f498-4d5b-8fe8-1bcf1bbf6827", 00:14:56.656 "is_configured": false, 00:14:56.656 "data_offset": 0, 00:14:56.656 "data_size": 63488 00:14:56.656 } 00:14:56.656 ] 00:14:56.656 }' 00:14:56.656 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.656 15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.930 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:56.931 15:22:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.931 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:56.931 15:22:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.931 15:22:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.931 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:56.931 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:56.931 15:22:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.931 15:22:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.190 [2024-11-20 15:22:43.411117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:57.190 15:22:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.190 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:57.190 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.190 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.190 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.190 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.190 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.190 15:22:43 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.190 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.190 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.190 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.190 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.190 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.190 15:22:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.190 15:22:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.190 15:22:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.190 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.190 "name": "Existed_Raid", 00:14:57.190 "uuid": "a9b20e22-8483-42ca-9eae-4506f06da79a", 00:14:57.190 "strip_size_kb": 64, 00:14:57.190 "state": "configuring", 00:14:57.190 "raid_level": "raid5f", 00:14:57.190 "superblock": true, 00:14:57.190 "num_base_bdevs": 3, 00:14:57.190 "num_base_bdevs_discovered": 2, 00:14:57.190 "num_base_bdevs_operational": 3, 00:14:57.190 "base_bdevs_list": [ 00:14:57.190 { 00:14:57.190 "name": "BaseBdev1", 00:14:57.190 "uuid": "25a1cf5e-3683-4080-a2fc-4e8b26609507", 00:14:57.190 "is_configured": true, 00:14:57.190 "data_offset": 2048, 00:14:57.190 "data_size": 63488 00:14:57.190 }, 00:14:57.190 { 00:14:57.190 "name": null, 00:14:57.190 "uuid": "b307d859-c93c-4577-bd43-9e1610e11789", 00:14:57.190 "is_configured": false, 00:14:57.190 "data_offset": 0, 00:14:57.190 "data_size": 63488 00:14:57.190 }, 00:14:57.190 { 
00:14:57.190 "name": "BaseBdev3", 00:14:57.190 "uuid": "6bea71a2-f498-4d5b-8fe8-1bcf1bbf6827", 00:14:57.190 "is_configured": true, 00:14:57.190 "data_offset": 2048, 00:14:57.190 "data_size": 63488 00:14:57.190 } 00:14:57.190 ] 00:14:57.190 }' 00:14:57.190 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.190 15:22:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.450 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:57.450 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.450 15:22:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.450 15:22:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.450 15:22:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.450 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:57.450 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:57.450 15:22:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.450 15:22:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.450 [2024-11-20 15:22:43.878869] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:57.709 15:22:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.709 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:57.709 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:14:57.709 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.709 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.709 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.709 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.709 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.709 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.709 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.709 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.709 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.709 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.709 15:22:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.709 15:22:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.709 15:22:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.709 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.709 "name": "Existed_Raid", 00:14:57.709 "uuid": "a9b20e22-8483-42ca-9eae-4506f06da79a", 00:14:57.709 "strip_size_kb": 64, 00:14:57.709 "state": "configuring", 00:14:57.709 "raid_level": "raid5f", 00:14:57.709 "superblock": true, 00:14:57.709 "num_base_bdevs": 3, 00:14:57.709 "num_base_bdevs_discovered": 1, 00:14:57.709 
"num_base_bdevs_operational": 3, 00:14:57.710 "base_bdevs_list": [ 00:14:57.710 { 00:14:57.710 "name": null, 00:14:57.710 "uuid": "25a1cf5e-3683-4080-a2fc-4e8b26609507", 00:14:57.710 "is_configured": false, 00:14:57.710 "data_offset": 0, 00:14:57.710 "data_size": 63488 00:14:57.710 }, 00:14:57.710 { 00:14:57.710 "name": null, 00:14:57.710 "uuid": "b307d859-c93c-4577-bd43-9e1610e11789", 00:14:57.710 "is_configured": false, 00:14:57.710 "data_offset": 0, 00:14:57.710 "data_size": 63488 00:14:57.710 }, 00:14:57.710 { 00:14:57.710 "name": "BaseBdev3", 00:14:57.710 "uuid": "6bea71a2-f498-4d5b-8fe8-1bcf1bbf6827", 00:14:57.710 "is_configured": true, 00:14:57.710 "data_offset": 2048, 00:14:57.710 "data_size": 63488 00:14:57.710 } 00:14:57.710 ] 00:14:57.710 }' 00:14:57.710 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.710 15:22:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.969 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.969 15:22:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.969 15:22:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.969 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:57.969 15:22:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.969 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:57.969 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:57.969 15:22:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.969 15:22:44 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.228 [2024-11-20 15:22:44.453915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:58.228 15:22:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.228 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:58.228 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.228 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.228 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.228 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.228 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.228 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.228 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.228 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.228 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.228 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.228 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.228 15:22:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.228 15:22:44 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:58.228 15:22:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.228 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.228 "name": "Existed_Raid", 00:14:58.228 "uuid": "a9b20e22-8483-42ca-9eae-4506f06da79a", 00:14:58.228 "strip_size_kb": 64, 00:14:58.228 "state": "configuring", 00:14:58.228 "raid_level": "raid5f", 00:14:58.228 "superblock": true, 00:14:58.228 "num_base_bdevs": 3, 00:14:58.228 "num_base_bdevs_discovered": 2, 00:14:58.228 "num_base_bdevs_operational": 3, 00:14:58.228 "base_bdevs_list": [ 00:14:58.228 { 00:14:58.228 "name": null, 00:14:58.228 "uuid": "25a1cf5e-3683-4080-a2fc-4e8b26609507", 00:14:58.228 "is_configured": false, 00:14:58.228 "data_offset": 0, 00:14:58.228 "data_size": 63488 00:14:58.228 }, 00:14:58.228 { 00:14:58.228 "name": "BaseBdev2", 00:14:58.228 "uuid": "b307d859-c93c-4577-bd43-9e1610e11789", 00:14:58.228 "is_configured": true, 00:14:58.228 "data_offset": 2048, 00:14:58.228 "data_size": 63488 00:14:58.228 }, 00:14:58.228 { 00:14:58.228 "name": "BaseBdev3", 00:14:58.228 "uuid": "6bea71a2-f498-4d5b-8fe8-1bcf1bbf6827", 00:14:58.228 "is_configured": true, 00:14:58.228 "data_offset": 2048, 00:14:58.228 "data_size": 63488 00:14:58.228 } 00:14:58.228 ] 00:14:58.228 }' 00:14:58.228 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.228 15:22:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.488 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.488 15:22:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.488 15:22:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.488 15:22:44 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:58.488 15:22:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.488 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:58.488 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:58.488 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.488 15:22:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.488 15:22:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.488 15:22:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.488 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 25a1cf5e-3683-4080-a2fc-4e8b26609507 00:14:58.488 15:22:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.488 15:22:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.747 [2024-11-20 15:22:45.002030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:58.747 [2024-11-20 15:22:45.002268] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:58.747 [2024-11-20 15:22:45.002287] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:58.747 [2024-11-20 15:22:45.002539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:58.747 NewBaseBdev 00:14:58.747 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.748 [2024-11-20 15:22:45.008322] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:58.748 [2024-11-20 15:22:45.008346] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:58.748 [2024-11-20 15:22:45.008626] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.748 [ 00:14:58.748 { 00:14:58.748 "name": "NewBaseBdev", 00:14:58.748 "aliases": [ 00:14:58.748 
"25a1cf5e-3683-4080-a2fc-4e8b26609507" 00:14:58.748 ], 00:14:58.748 "product_name": "Malloc disk", 00:14:58.748 "block_size": 512, 00:14:58.748 "num_blocks": 65536, 00:14:58.748 "uuid": "25a1cf5e-3683-4080-a2fc-4e8b26609507", 00:14:58.748 "assigned_rate_limits": { 00:14:58.748 "rw_ios_per_sec": 0, 00:14:58.748 "rw_mbytes_per_sec": 0, 00:14:58.748 "r_mbytes_per_sec": 0, 00:14:58.748 "w_mbytes_per_sec": 0 00:14:58.748 }, 00:14:58.748 "claimed": true, 00:14:58.748 "claim_type": "exclusive_write", 00:14:58.748 "zoned": false, 00:14:58.748 "supported_io_types": { 00:14:58.748 "read": true, 00:14:58.748 "write": true, 00:14:58.748 "unmap": true, 00:14:58.748 "flush": true, 00:14:58.748 "reset": true, 00:14:58.748 "nvme_admin": false, 00:14:58.748 "nvme_io": false, 00:14:58.748 "nvme_io_md": false, 00:14:58.748 "write_zeroes": true, 00:14:58.748 "zcopy": true, 00:14:58.748 "get_zone_info": false, 00:14:58.748 "zone_management": false, 00:14:58.748 "zone_append": false, 00:14:58.748 "compare": false, 00:14:58.748 "compare_and_write": false, 00:14:58.748 "abort": true, 00:14:58.748 "seek_hole": false, 00:14:58.748 "seek_data": false, 00:14:58.748 "copy": true, 00:14:58.748 "nvme_iov_md": false 00:14:58.748 }, 00:14:58.748 "memory_domains": [ 00:14:58.748 { 00:14:58.748 "dma_device_id": "system", 00:14:58.748 "dma_device_type": 1 00:14:58.748 }, 00:14:58.748 { 00:14:58.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.748 "dma_device_type": 2 00:14:58.748 } 00:14:58.748 ], 00:14:58.748 "driver_specific": {} 00:14:58.748 } 00:14:58.748 ] 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.748 "name": "Existed_Raid", 00:14:58.748 "uuid": "a9b20e22-8483-42ca-9eae-4506f06da79a", 00:14:58.748 "strip_size_kb": 64, 00:14:58.748 "state": "online", 00:14:58.748 "raid_level": "raid5f", 00:14:58.748 "superblock": true, 00:14:58.748 "num_base_bdevs": 3, 00:14:58.748 
"num_base_bdevs_discovered": 3, 00:14:58.748 "num_base_bdevs_operational": 3, 00:14:58.748 "base_bdevs_list": [ 00:14:58.748 { 00:14:58.748 "name": "NewBaseBdev", 00:14:58.748 "uuid": "25a1cf5e-3683-4080-a2fc-4e8b26609507", 00:14:58.748 "is_configured": true, 00:14:58.748 "data_offset": 2048, 00:14:58.748 "data_size": 63488 00:14:58.748 }, 00:14:58.748 { 00:14:58.748 "name": "BaseBdev2", 00:14:58.748 "uuid": "b307d859-c93c-4577-bd43-9e1610e11789", 00:14:58.748 "is_configured": true, 00:14:58.748 "data_offset": 2048, 00:14:58.748 "data_size": 63488 00:14:58.748 }, 00:14:58.748 { 00:14:58.748 "name": "BaseBdev3", 00:14:58.748 "uuid": "6bea71a2-f498-4d5b-8fe8-1bcf1bbf6827", 00:14:58.748 "is_configured": true, 00:14:58.748 "data_offset": 2048, 00:14:58.748 "data_size": 63488 00:14:58.748 } 00:14:58.748 ] 00:14:58.748 }' 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.748 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.007 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:59.007 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:59.007 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:59.007 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:59.007 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:59.007 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:59.007 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:59.007 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 
00:14:59.007 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.007 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.007 [2024-11-20 15:22:45.471016] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:59.267 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.267 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:59.267 "name": "Existed_Raid", 00:14:59.267 "aliases": [ 00:14:59.267 "a9b20e22-8483-42ca-9eae-4506f06da79a" 00:14:59.267 ], 00:14:59.267 "product_name": "Raid Volume", 00:14:59.267 "block_size": 512, 00:14:59.267 "num_blocks": 126976, 00:14:59.267 "uuid": "a9b20e22-8483-42ca-9eae-4506f06da79a", 00:14:59.267 "assigned_rate_limits": { 00:14:59.267 "rw_ios_per_sec": 0, 00:14:59.267 "rw_mbytes_per_sec": 0, 00:14:59.267 "r_mbytes_per_sec": 0, 00:14:59.267 "w_mbytes_per_sec": 0 00:14:59.267 }, 00:14:59.267 "claimed": false, 00:14:59.267 "zoned": false, 00:14:59.267 "supported_io_types": { 00:14:59.267 "read": true, 00:14:59.267 "write": true, 00:14:59.267 "unmap": false, 00:14:59.267 "flush": false, 00:14:59.267 "reset": true, 00:14:59.267 "nvme_admin": false, 00:14:59.267 "nvme_io": false, 00:14:59.267 "nvme_io_md": false, 00:14:59.267 "write_zeroes": true, 00:14:59.267 "zcopy": false, 00:14:59.267 "get_zone_info": false, 00:14:59.267 "zone_management": false, 00:14:59.267 "zone_append": false, 00:14:59.267 "compare": false, 00:14:59.267 "compare_and_write": false, 00:14:59.267 "abort": false, 00:14:59.267 "seek_hole": false, 00:14:59.267 "seek_data": false, 00:14:59.267 "copy": false, 00:14:59.267 "nvme_iov_md": false 00:14:59.267 }, 00:14:59.267 "driver_specific": { 00:14:59.267 "raid": { 00:14:59.267 "uuid": "a9b20e22-8483-42ca-9eae-4506f06da79a", 00:14:59.267 "strip_size_kb": 64, 00:14:59.267 "state": 
"online", 00:14:59.267 "raid_level": "raid5f", 00:14:59.267 "superblock": true, 00:14:59.267 "num_base_bdevs": 3, 00:14:59.267 "num_base_bdevs_discovered": 3, 00:14:59.267 "num_base_bdevs_operational": 3, 00:14:59.267 "base_bdevs_list": [ 00:14:59.267 { 00:14:59.267 "name": "NewBaseBdev", 00:14:59.267 "uuid": "25a1cf5e-3683-4080-a2fc-4e8b26609507", 00:14:59.267 "is_configured": true, 00:14:59.267 "data_offset": 2048, 00:14:59.267 "data_size": 63488 00:14:59.267 }, 00:14:59.267 { 00:14:59.267 "name": "BaseBdev2", 00:14:59.267 "uuid": "b307d859-c93c-4577-bd43-9e1610e11789", 00:14:59.267 "is_configured": true, 00:14:59.267 "data_offset": 2048, 00:14:59.267 "data_size": 63488 00:14:59.267 }, 00:14:59.267 { 00:14:59.267 "name": "BaseBdev3", 00:14:59.267 "uuid": "6bea71a2-f498-4d5b-8fe8-1bcf1bbf6827", 00:14:59.267 "is_configured": true, 00:14:59.267 "data_offset": 2048, 00:14:59.267 "data_size": 63488 00:14:59.267 } 00:14:59.267 ] 00:14:59.267 } 00:14:59.267 } 00:14:59.267 }' 00:14:59.267 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:59.267 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:59.267 BaseBdev2 00:14:59.267 BaseBdev3' 00:14:59.267 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.267 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:59.267 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.267 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:59.267 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:14:59.267 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.267 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.267 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.267 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.267 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.267 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.267 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:59.268 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.268 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.268 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.268 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.268 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.268 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.268 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.268 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:59.268 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.268 15:22:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.268 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.268 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.268 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.268 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.268 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:59.268 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.268 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.268 [2024-11-20 15:22:45.742833] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:59.268 [2024-11-20 15:22:45.742999] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:59.268 [2024-11-20 15:22:45.743121] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:59.268 [2024-11-20 15:22:45.743409] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:59.268 [2024-11-20 15:22:45.743425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:59.268 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.527 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80331 00:14:59.527 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80331 ']' 00:14:59.527 15:22:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80331 00:14:59.527 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:59.527 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:59.527 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80331 00:14:59.527 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:59.527 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:59.527 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80331' 00:14:59.527 killing process with pid 80331 00:14:59.527 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80331 00:14:59.527 [2024-11-20 15:22:45.796966] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:59.527 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80331 00:14:59.786 [2024-11-20 15:22:46.103119] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:01.164 15:22:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:01.165 00:15:01.165 real 0m10.486s 00:15:01.165 user 0m16.557s 00:15:01.165 sys 0m2.234s 00:15:01.165 15:22:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:01.165 ************************************ 00:15:01.165 END TEST raid5f_state_function_test_sb 00:15:01.165 ************************************ 00:15:01.165 15:22:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.165 15:22:47 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 3 00:15:01.165 15:22:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:01.165 15:22:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.165 15:22:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:01.165 ************************************ 00:15:01.165 START TEST raid5f_superblock_test 00:15:01.165 ************************************ 00:15:01.165 15:22:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:15:01.165 15:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:01.165 15:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:01.165 15:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:01.165 15:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:01.165 15:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:01.165 15:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:01.165 15:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:01.165 15:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:01.165 15:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:01.165 15:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:01.165 15:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:01.165 15:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:01.165 15:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:01.165 15:22:47 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:01.165 15:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:01.165 15:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:01.165 15:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=80946 00:15:01.165 15:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:01.165 15:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 80946 00:15:01.165 15:22:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 80946 ']' 00:15:01.165 15:22:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.165 15:22:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:01.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.165 15:22:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.165 15:22:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:01.165 15:22:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.165 [2024-11-20 15:22:47.432520] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:15:01.165 [2024-11-20 15:22:47.432674] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80946 ] 00:15:01.165 [2024-11-20 15:22:47.611150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.424 [2024-11-20 15:22:47.733394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.683 [2024-11-20 15:22:47.921915] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.683 [2024-11-20 15:22:47.921983] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.943 malloc1 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.943 [2024-11-20 15:22:48.326387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:01.943 [2024-11-20 15:22:48.326604] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.943 [2024-11-20 15:22:48.326676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:01.943 [2024-11-20 15:22:48.326783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.943 [2024-11-20 15:22:48.329285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.943 [2024-11-20 15:22:48.329433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:01.943 pt1 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.943 malloc2 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.943 [2024-11-20 15:22:48.383782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:01.943 [2024-11-20 15:22:48.383849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.943 [2024-11-20 15:22:48.383881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:01.943 [2024-11-20 15:22:48.383892] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.943 [2024-11-20 15:22:48.386259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.943 [2024-11-20 15:22:48.386301] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:01.943 pt2 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.943 15:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.219 malloc3 00:15:02.219 15:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.219 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:02.219 15:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.219 15:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.219 [2024-11-20 15:22:48.458148] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:02.219 [2024-11-20 15:22:48.458408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.219 [2024-11-20 15:22:48.458444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:02.219 [2024-11-20 15:22:48.458457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.219 [2024-11-20 15:22:48.461035] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.219 [2024-11-20 15:22:48.461079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:02.219 pt3 00:15:02.219 15:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.219 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:02.219 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:02.219 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:02.219 15:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.219 15:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.219 [2024-11-20 15:22:48.470175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:02.219 [2024-11-20 15:22:48.472491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:02.219 [2024-11-20 15:22:48.472715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:02.219 [2024-11-20 15:22:48.473020] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:02.219 [2024-11-20 15:22:48.473129] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:15:02.219 [2024-11-20 15:22:48.473454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:02.219 [2024-11-20 15:22:48.479198] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:02.219 [2024-11-20 15:22:48.479342] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:02.219 [2024-11-20 15:22:48.479711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.219 15:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.219 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:02.219 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.219 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.219 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.219 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.219 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:02.219 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.219 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.219 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.219 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.219 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.219 15:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.219 
15:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.219 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.219 15:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.219 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.219 "name": "raid_bdev1", 00:15:02.219 "uuid": "c3d75102-d6fa-4b4f-b6e7-5b7679db3809", 00:15:02.219 "strip_size_kb": 64, 00:15:02.219 "state": "online", 00:15:02.219 "raid_level": "raid5f", 00:15:02.219 "superblock": true, 00:15:02.219 "num_base_bdevs": 3, 00:15:02.219 "num_base_bdevs_discovered": 3, 00:15:02.219 "num_base_bdevs_operational": 3, 00:15:02.219 "base_bdevs_list": [ 00:15:02.219 { 00:15:02.219 "name": "pt1", 00:15:02.219 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:02.219 "is_configured": true, 00:15:02.219 "data_offset": 2048, 00:15:02.219 "data_size": 63488 00:15:02.219 }, 00:15:02.219 { 00:15:02.219 "name": "pt2", 00:15:02.219 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:02.219 "is_configured": true, 00:15:02.219 "data_offset": 2048, 00:15:02.219 "data_size": 63488 00:15:02.219 }, 00:15:02.219 { 00:15:02.219 "name": "pt3", 00:15:02.219 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:02.219 "is_configured": true, 00:15:02.219 "data_offset": 2048, 00:15:02.219 "data_size": 63488 00:15:02.219 } 00:15:02.219 ] 00:15:02.219 }' 00:15:02.219 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.219 15:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.517 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:02.517 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:02.517 15:22:48 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:02.517 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:02.517 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:02.517 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:02.517 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:02.517 15:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.517 15:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.517 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:02.517 [2024-11-20 15:22:48.926001] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.517 15:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.517 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:02.517 "name": "raid_bdev1", 00:15:02.517 "aliases": [ 00:15:02.517 "c3d75102-d6fa-4b4f-b6e7-5b7679db3809" 00:15:02.517 ], 00:15:02.517 "product_name": "Raid Volume", 00:15:02.517 "block_size": 512, 00:15:02.517 "num_blocks": 126976, 00:15:02.517 "uuid": "c3d75102-d6fa-4b4f-b6e7-5b7679db3809", 00:15:02.517 "assigned_rate_limits": { 00:15:02.517 "rw_ios_per_sec": 0, 00:15:02.517 "rw_mbytes_per_sec": 0, 00:15:02.517 "r_mbytes_per_sec": 0, 00:15:02.517 "w_mbytes_per_sec": 0 00:15:02.517 }, 00:15:02.517 "claimed": false, 00:15:02.517 "zoned": false, 00:15:02.517 "supported_io_types": { 00:15:02.517 "read": true, 00:15:02.517 "write": true, 00:15:02.517 "unmap": false, 00:15:02.517 "flush": false, 00:15:02.517 "reset": true, 00:15:02.517 "nvme_admin": false, 00:15:02.517 "nvme_io": false, 00:15:02.517 "nvme_io_md": false, 
00:15:02.517 "write_zeroes": true, 00:15:02.517 "zcopy": false, 00:15:02.517 "get_zone_info": false, 00:15:02.517 "zone_management": false, 00:15:02.517 "zone_append": false, 00:15:02.517 "compare": false, 00:15:02.517 "compare_and_write": false, 00:15:02.517 "abort": false, 00:15:02.517 "seek_hole": false, 00:15:02.517 "seek_data": false, 00:15:02.517 "copy": false, 00:15:02.517 "nvme_iov_md": false 00:15:02.517 }, 00:15:02.517 "driver_specific": { 00:15:02.517 "raid": { 00:15:02.517 "uuid": "c3d75102-d6fa-4b4f-b6e7-5b7679db3809", 00:15:02.517 "strip_size_kb": 64, 00:15:02.517 "state": "online", 00:15:02.517 "raid_level": "raid5f", 00:15:02.517 "superblock": true, 00:15:02.517 "num_base_bdevs": 3, 00:15:02.517 "num_base_bdevs_discovered": 3, 00:15:02.517 "num_base_bdevs_operational": 3, 00:15:02.517 "base_bdevs_list": [ 00:15:02.517 { 00:15:02.517 "name": "pt1", 00:15:02.517 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:02.517 "is_configured": true, 00:15:02.517 "data_offset": 2048, 00:15:02.517 "data_size": 63488 00:15:02.517 }, 00:15:02.517 { 00:15:02.517 "name": "pt2", 00:15:02.517 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:02.517 "is_configured": true, 00:15:02.517 "data_offset": 2048, 00:15:02.517 "data_size": 63488 00:15:02.517 }, 00:15:02.517 { 00:15:02.517 "name": "pt3", 00:15:02.517 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:02.517 "is_configured": true, 00:15:02.517 "data_offset": 2048, 00:15:02.517 "data_size": 63488 00:15:02.517 } 00:15:02.517 ] 00:15:02.517 } 00:15:02.517 } 00:15:02.517 }' 00:15:02.517 15:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:02.777 pt2 00:15:02.777 pt3' 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:02.777 
15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.777 [2024-11-20 15:22:49.177827] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c3d75102-d6fa-4b4f-b6e7-5b7679db3809 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c3d75102-d6fa-4b4f-b6e7-5b7679db3809 ']' 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:02.777 15:22:49 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.777 [2024-11-20 15:22:49.221584] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:02.777 [2024-11-20 15:22:49.221625] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:02.777 [2024-11-20 15:22:49.221720] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:02.777 [2024-11-20 15:22:49.221796] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:02.777 [2024-11-20 15:22:49.221808] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:02.777 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.038 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.038 [2024-11-20 15:22:49.365435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:03.038 [2024-11-20 15:22:49.367707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:03.038 [2024-11-20 15:22:49.367767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:03.038 [2024-11-20 15:22:49.367823] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:03.038 [2024-11-20 15:22:49.367880] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:03.038 [2024-11-20 15:22:49.367903] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:03.038 [2024-11-20 15:22:49.367926] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:03.039 [2024-11-20 15:22:49.367937] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:03.039 request: 00:15:03.039 { 00:15:03.039 "name": "raid_bdev1", 00:15:03.039 "raid_level": "raid5f", 00:15:03.039 "base_bdevs": [ 00:15:03.039 "malloc1", 00:15:03.039 "malloc2", 00:15:03.039 "malloc3" 00:15:03.039 ], 00:15:03.039 "strip_size_kb": 64, 00:15:03.039 "superblock": false, 00:15:03.039 "method": "bdev_raid_create", 00:15:03.039 "req_id": 1 00:15:03.039 } 00:15:03.039 Got JSON-RPC error response 00:15:03.039 response: 00:15:03.039 { 00:15:03.039 "code": -17, 00:15:03.039 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:03.039 } 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:03.039 
15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.039 [2024-11-20 15:22:49.433280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:03.039 [2024-11-20 15:22:49.433352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.039 [2024-11-20 15:22:49.433375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:03.039 [2024-11-20 15:22:49.433386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.039 [2024-11-20 15:22:49.436000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.039 [2024-11-20 15:22:49.436044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:03.039 [2024-11-20 15:22:49.436138] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:03.039 [2024-11-20 15:22:49.436193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:03.039 pt1 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.039 "name": "raid_bdev1", 00:15:03.039 "uuid": "c3d75102-d6fa-4b4f-b6e7-5b7679db3809", 00:15:03.039 "strip_size_kb": 64, 00:15:03.039 "state": "configuring", 00:15:03.039 "raid_level": "raid5f", 00:15:03.039 "superblock": true, 00:15:03.039 "num_base_bdevs": 3, 00:15:03.039 "num_base_bdevs_discovered": 1, 00:15:03.039 
"num_base_bdevs_operational": 3, 00:15:03.039 "base_bdevs_list": [ 00:15:03.039 { 00:15:03.039 "name": "pt1", 00:15:03.039 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:03.039 "is_configured": true, 00:15:03.039 "data_offset": 2048, 00:15:03.039 "data_size": 63488 00:15:03.039 }, 00:15:03.039 { 00:15:03.039 "name": null, 00:15:03.039 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:03.039 "is_configured": false, 00:15:03.039 "data_offset": 2048, 00:15:03.039 "data_size": 63488 00:15:03.039 }, 00:15:03.039 { 00:15:03.039 "name": null, 00:15:03.039 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:03.039 "is_configured": false, 00:15:03.039 "data_offset": 2048, 00:15:03.039 "data_size": 63488 00:15:03.039 } 00:15:03.039 ] 00:15:03.039 }' 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.039 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.608 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:03.608 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:03.608 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.608 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.608 [2024-11-20 15:22:49.844752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:03.608 [2024-11-20 15:22:49.844834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.608 [2024-11-20 15:22:49.844861] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:03.608 [2024-11-20 15:22:49.844872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.608 [2024-11-20 15:22:49.845325] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.608 [2024-11-20 15:22:49.845354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:03.608 [2024-11-20 15:22:49.845445] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:03.608 [2024-11-20 15:22:49.845474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:03.608 pt2 00:15:03.608 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.608 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:03.608 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.608 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.608 [2024-11-20 15:22:49.852755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:03.608 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.608 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:03.608 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.608 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.608 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.608 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.608 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.608 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.608 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:03.608 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.608 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.608 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.608 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.608 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.608 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.608 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.608 15:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.608 "name": "raid_bdev1", 00:15:03.608 "uuid": "c3d75102-d6fa-4b4f-b6e7-5b7679db3809", 00:15:03.608 "strip_size_kb": 64, 00:15:03.608 "state": "configuring", 00:15:03.608 "raid_level": "raid5f", 00:15:03.608 "superblock": true, 00:15:03.608 "num_base_bdevs": 3, 00:15:03.608 "num_base_bdevs_discovered": 1, 00:15:03.608 "num_base_bdevs_operational": 3, 00:15:03.608 "base_bdevs_list": [ 00:15:03.608 { 00:15:03.608 "name": "pt1", 00:15:03.608 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:03.608 "is_configured": true, 00:15:03.608 "data_offset": 2048, 00:15:03.608 "data_size": 63488 00:15:03.608 }, 00:15:03.608 { 00:15:03.608 "name": null, 00:15:03.608 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:03.608 "is_configured": false, 00:15:03.608 "data_offset": 0, 00:15:03.608 "data_size": 63488 00:15:03.608 }, 00:15:03.608 { 00:15:03.608 "name": null, 00:15:03.608 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:03.608 "is_configured": false, 00:15:03.608 "data_offset": 2048, 00:15:03.608 "data_size": 63488 00:15:03.608 } 00:15:03.608 ] 00:15:03.608 }' 00:15:03.608 15:22:49 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.608 15:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.868 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:03.868 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:03.868 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:03.868 15:22:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.868 15:22:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.868 [2024-11-20 15:22:50.308052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:03.868 [2024-11-20 15:22:50.308133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.868 [2024-11-20 15:22:50.308153] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:03.868 [2024-11-20 15:22:50.308167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.868 [2024-11-20 15:22:50.308636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.868 [2024-11-20 15:22:50.308672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:03.868 [2024-11-20 15:22:50.308761] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:03.868 [2024-11-20 15:22:50.308786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:03.868 pt2 00:15:03.868 15:22:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.868 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:03.868 15:22:50 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:03.868 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:03.868 15:22:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.868 15:22:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.868 [2024-11-20 15:22:50.320034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:03.868 [2024-11-20 15:22:50.320107] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.868 [2024-11-20 15:22:50.320126] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:03.868 [2024-11-20 15:22:50.320139] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.868 [2024-11-20 15:22:50.320571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.868 [2024-11-20 15:22:50.320596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:03.868 [2024-11-20 15:22:50.320687] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:03.868 [2024-11-20 15:22:50.320714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:03.868 [2024-11-20 15:22:50.320846] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:03.868 [2024-11-20 15:22:50.320860] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:03.868 [2024-11-20 15:22:50.321107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:03.868 [2024-11-20 15:22:50.326378] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:03.868 [2024-11-20 15:22:50.326554] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:03.868 [2024-11-20 15:22:50.326849] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.868 pt3 00:15:03.868 15:22:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.868 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:03.868 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:03.868 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:03.868 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.868 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.868 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.868 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.868 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.868 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.868 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.868 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.868 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.868 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.868 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.868 15:22:50 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.868 15:22:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.143 15:22:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.143 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.143 "name": "raid_bdev1", 00:15:04.143 "uuid": "c3d75102-d6fa-4b4f-b6e7-5b7679db3809", 00:15:04.143 "strip_size_kb": 64, 00:15:04.143 "state": "online", 00:15:04.143 "raid_level": "raid5f", 00:15:04.143 "superblock": true, 00:15:04.143 "num_base_bdevs": 3, 00:15:04.143 "num_base_bdevs_discovered": 3, 00:15:04.143 "num_base_bdevs_operational": 3, 00:15:04.143 "base_bdevs_list": [ 00:15:04.143 { 00:15:04.143 "name": "pt1", 00:15:04.143 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:04.143 "is_configured": true, 00:15:04.143 "data_offset": 2048, 00:15:04.143 "data_size": 63488 00:15:04.143 }, 00:15:04.143 { 00:15:04.143 "name": "pt2", 00:15:04.143 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:04.143 "is_configured": true, 00:15:04.143 "data_offset": 2048, 00:15:04.143 "data_size": 63488 00:15:04.143 }, 00:15:04.143 { 00:15:04.143 "name": "pt3", 00:15:04.143 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:04.143 "is_configured": true, 00:15:04.143 "data_offset": 2048, 00:15:04.143 "data_size": 63488 00:15:04.143 } 00:15:04.143 ] 00:15:04.143 }' 00:15:04.143 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.143 15:22:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.404 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:04.404 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:04.404 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:15:04.404 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:04.404 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:04.404 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:04.404 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:04.404 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:04.404 15:22:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.404 15:22:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.404 [2024-11-20 15:22:50.764999] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:04.404 15:22:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.404 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:04.404 "name": "raid_bdev1", 00:15:04.404 "aliases": [ 00:15:04.404 "c3d75102-d6fa-4b4f-b6e7-5b7679db3809" 00:15:04.404 ], 00:15:04.404 "product_name": "Raid Volume", 00:15:04.404 "block_size": 512, 00:15:04.404 "num_blocks": 126976, 00:15:04.404 "uuid": "c3d75102-d6fa-4b4f-b6e7-5b7679db3809", 00:15:04.404 "assigned_rate_limits": { 00:15:04.404 "rw_ios_per_sec": 0, 00:15:04.404 "rw_mbytes_per_sec": 0, 00:15:04.404 "r_mbytes_per_sec": 0, 00:15:04.404 "w_mbytes_per_sec": 0 00:15:04.404 }, 00:15:04.404 "claimed": false, 00:15:04.404 "zoned": false, 00:15:04.404 "supported_io_types": { 00:15:04.404 "read": true, 00:15:04.404 "write": true, 00:15:04.404 "unmap": false, 00:15:04.404 "flush": false, 00:15:04.404 "reset": true, 00:15:04.404 "nvme_admin": false, 00:15:04.404 "nvme_io": false, 00:15:04.404 "nvme_io_md": false, 00:15:04.404 "write_zeroes": true, 00:15:04.404 "zcopy": false, 00:15:04.404 
"get_zone_info": false, 00:15:04.404 "zone_management": false, 00:15:04.404 "zone_append": false, 00:15:04.404 "compare": false, 00:15:04.404 "compare_and_write": false, 00:15:04.404 "abort": false, 00:15:04.404 "seek_hole": false, 00:15:04.404 "seek_data": false, 00:15:04.404 "copy": false, 00:15:04.404 "nvme_iov_md": false 00:15:04.404 }, 00:15:04.404 "driver_specific": { 00:15:04.404 "raid": { 00:15:04.404 "uuid": "c3d75102-d6fa-4b4f-b6e7-5b7679db3809", 00:15:04.404 "strip_size_kb": 64, 00:15:04.404 "state": "online", 00:15:04.404 "raid_level": "raid5f", 00:15:04.404 "superblock": true, 00:15:04.404 "num_base_bdevs": 3, 00:15:04.404 "num_base_bdevs_discovered": 3, 00:15:04.404 "num_base_bdevs_operational": 3, 00:15:04.404 "base_bdevs_list": [ 00:15:04.404 { 00:15:04.404 "name": "pt1", 00:15:04.404 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:04.404 "is_configured": true, 00:15:04.404 "data_offset": 2048, 00:15:04.404 "data_size": 63488 00:15:04.404 }, 00:15:04.404 { 00:15:04.404 "name": "pt2", 00:15:04.404 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:04.404 "is_configured": true, 00:15:04.404 "data_offset": 2048, 00:15:04.404 "data_size": 63488 00:15:04.404 }, 00:15:04.404 { 00:15:04.404 "name": "pt3", 00:15:04.404 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:04.404 "is_configured": true, 00:15:04.404 "data_offset": 2048, 00:15:04.404 "data_size": 63488 00:15:04.404 } 00:15:04.404 ] 00:15:04.404 } 00:15:04.404 } 00:15:04.404 }' 00:15:04.404 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:04.404 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:04.404 pt2 00:15:04.404 pt3' 00:15:04.404 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.664 15:22:50 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:04.664 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.664 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.664 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:04.664 15:22:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.664 15:22:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.664 15:22:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.664 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.664 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:04.664 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.664 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:04.664 15:22:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.664 15:22:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.664 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.664 15:22:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.664 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.664 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:04.664 15:22:50 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.664 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.664 15:22:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:04.664 15:22:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.664 15:22:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.664 15:22:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.664 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.664 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:04.664 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:04.664 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:04.664 15:22:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.664 15:22:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.664 [2024-11-20 15:22:51.040542] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:04.664 15:22:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.664 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c3d75102-d6fa-4b4f-b6e7-5b7679db3809 '!=' c3d75102-d6fa-4b4f-b6e7-5b7679db3809 ']' 00:15:04.664 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:04.664 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:04.664 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:15:04.664 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:04.664 15:22:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.664 15:22:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.664 [2024-11-20 15:22:51.088371] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:04.664 15:22:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.664 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:04.664 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.664 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.664 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.664 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.664 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:04.664 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.664 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.664 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.664 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.664 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.664 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.664 15:22:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:04.664 15:22:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.664 15:22:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.923 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.923 "name": "raid_bdev1", 00:15:04.923 "uuid": "c3d75102-d6fa-4b4f-b6e7-5b7679db3809", 00:15:04.923 "strip_size_kb": 64, 00:15:04.923 "state": "online", 00:15:04.923 "raid_level": "raid5f", 00:15:04.923 "superblock": true, 00:15:04.923 "num_base_bdevs": 3, 00:15:04.923 "num_base_bdevs_discovered": 2, 00:15:04.923 "num_base_bdevs_operational": 2, 00:15:04.923 "base_bdevs_list": [ 00:15:04.923 { 00:15:04.923 "name": null, 00:15:04.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.923 "is_configured": false, 00:15:04.923 "data_offset": 0, 00:15:04.923 "data_size": 63488 00:15:04.923 }, 00:15:04.923 { 00:15:04.923 "name": "pt2", 00:15:04.923 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:04.923 "is_configured": true, 00:15:04.923 "data_offset": 2048, 00:15:04.923 "data_size": 63488 00:15:04.923 }, 00:15:04.923 { 00:15:04.923 "name": "pt3", 00:15:04.923 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:04.923 "is_configured": true, 00:15:04.923 "data_offset": 2048, 00:15:04.923 "data_size": 63488 00:15:04.923 } 00:15:04.923 ] 00:15:04.923 }' 00:15:04.923 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.923 15:22:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.183 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:05.183 15:22:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.183 15:22:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.183 [2024-11-20 15:22:51.519803] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:05.183 [2024-11-20 15:22:51.519838] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:05.183 [2024-11-20 15:22:51.519915] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:05.183 [2024-11-20 15:22:51.519974] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:05.183 [2024-11-20 15:22:51.519991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:05.183 15:22:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.183 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.183 15:22:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.183 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:05.183 15:22:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.183 15:22:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.183 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:05.183 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:05.183 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:05.183 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:05.183 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:05.183 15:22:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.183 15:22:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:05.183 15:22:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.183 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:05.183 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:05.183 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:05.184 15:22:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.184 15:22:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.184 15:22:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.184 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:05.184 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:05.184 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:05.184 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:05.184 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:05.184 15:22:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.184 15:22:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.184 [2024-11-20 15:22:51.603683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:05.184 [2024-11-20 15:22:51.603762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.184 [2024-11-20 15:22:51.603782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:05.184 [2024-11-20 15:22:51.603796] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:15:05.184 [2024-11-20 15:22:51.606482] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.184 [2024-11-20 15:22:51.606649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:05.184 [2024-11-20 15:22:51.606927] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:05.184 [2024-11-20 15:22:51.607073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:05.184 pt2 00:15:05.184 15:22:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.184 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:05.184 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.184 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.184 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.184 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.184 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:05.184 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.184 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.184 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.184 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.184 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.184 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:05.184 15:22:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.184 15:22:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.184 15:22:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.184 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.184 "name": "raid_bdev1", 00:15:05.184 "uuid": "c3d75102-d6fa-4b4f-b6e7-5b7679db3809", 00:15:05.184 "strip_size_kb": 64, 00:15:05.184 "state": "configuring", 00:15:05.184 "raid_level": "raid5f", 00:15:05.184 "superblock": true, 00:15:05.184 "num_base_bdevs": 3, 00:15:05.184 "num_base_bdevs_discovered": 1, 00:15:05.184 "num_base_bdevs_operational": 2, 00:15:05.184 "base_bdevs_list": [ 00:15:05.184 { 00:15:05.184 "name": null, 00:15:05.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.184 "is_configured": false, 00:15:05.184 "data_offset": 2048, 00:15:05.184 "data_size": 63488 00:15:05.184 }, 00:15:05.184 { 00:15:05.184 "name": "pt2", 00:15:05.184 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:05.184 "is_configured": true, 00:15:05.184 "data_offset": 2048, 00:15:05.184 "data_size": 63488 00:15:05.184 }, 00:15:05.184 { 00:15:05.184 "name": null, 00:15:05.184 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:05.184 "is_configured": false, 00:15:05.184 "data_offset": 2048, 00:15:05.184 "data_size": 63488 00:15:05.184 } 00:15:05.184 ] 00:15:05.184 }' 00:15:05.184 15:22:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.184 15:22:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.752 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:05.752 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:05.752 15:22:52 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:15:05.752 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:05.752 15:22:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.752 15:22:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.752 [2024-11-20 15:22:52.047033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:05.752 [2024-11-20 15:22:52.047280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.752 [2024-11-20 15:22:52.047313] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:05.752 [2024-11-20 15:22:52.047329] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.752 [2024-11-20 15:22:52.047838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.752 [2024-11-20 15:22:52.047863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:05.752 [2024-11-20 15:22:52.047951] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:05.752 [2024-11-20 15:22:52.047980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:05.752 [2024-11-20 15:22:52.048109] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:05.752 [2024-11-20 15:22:52.048122] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:05.752 [2024-11-20 15:22:52.048391] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:05.752 [2024-11-20 15:22:52.053920] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:05.752 [2024-11-20 15:22:52.054069] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:15:05.752 [2024-11-20 15:22:52.054492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.752 pt3 00:15:05.752 15:22:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.752 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:05.752 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.752 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.752 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.752 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.752 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:05.752 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.752 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.752 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.752 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.752 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.752 15:22:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.752 15:22:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.752 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.752 15:22:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.752 15:22:52 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.752 "name": "raid_bdev1", 00:15:05.752 "uuid": "c3d75102-d6fa-4b4f-b6e7-5b7679db3809", 00:15:05.752 "strip_size_kb": 64, 00:15:05.752 "state": "online", 00:15:05.752 "raid_level": "raid5f", 00:15:05.752 "superblock": true, 00:15:05.752 "num_base_bdevs": 3, 00:15:05.752 "num_base_bdevs_discovered": 2, 00:15:05.752 "num_base_bdevs_operational": 2, 00:15:05.752 "base_bdevs_list": [ 00:15:05.752 { 00:15:05.752 "name": null, 00:15:05.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.752 "is_configured": false, 00:15:05.752 "data_offset": 2048, 00:15:05.752 "data_size": 63488 00:15:05.752 }, 00:15:05.752 { 00:15:05.752 "name": "pt2", 00:15:05.752 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:05.752 "is_configured": true, 00:15:05.752 "data_offset": 2048, 00:15:05.752 "data_size": 63488 00:15:05.752 }, 00:15:05.752 { 00:15:05.752 "name": "pt3", 00:15:05.752 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:05.752 "is_configured": true, 00:15:05.752 "data_offset": 2048, 00:15:05.752 "data_size": 63488 00:15:05.752 } 00:15:05.752 ] 00:15:05.752 }' 00:15:05.752 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.752 15:22:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.012 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:06.012 15:22:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.012 15:22:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.272 [2024-11-20 15:22:52.493239] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:06.272 [2024-11-20 15:22:52.493275] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:06.272 [2024-11-20 15:22:52.493354] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:06.272 [2024-11-20 15:22:52.493422] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:06.272 [2024-11-20 15:22:52.493434] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.272 [2024-11-20 15:22:52.561173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:06.272 [2024-11-20 15:22:52.561252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.272 [2024-11-20 15:22:52.561278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:06.272 [2024-11-20 15:22:52.561291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.272 [2024-11-20 15:22:52.564155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.272 [2024-11-20 15:22:52.564201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:06.272 [2024-11-20 15:22:52.564301] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:06.272 [2024-11-20 15:22:52.564361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:06.272 [2024-11-20 15:22:52.564517] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:06.272 [2024-11-20 15:22:52.564532] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:06.272 [2024-11-20 15:22:52.564562] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:06.272 [2024-11-20 15:22:52.564626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:06.272 pt1 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:06.272 15:22:52 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.272 "name": "raid_bdev1", 00:15:06.272 "uuid": "c3d75102-d6fa-4b4f-b6e7-5b7679db3809", 00:15:06.272 "strip_size_kb": 64, 00:15:06.272 "state": "configuring", 00:15:06.272 "raid_level": "raid5f", 00:15:06.272 
"superblock": true, 00:15:06.272 "num_base_bdevs": 3, 00:15:06.272 "num_base_bdevs_discovered": 1, 00:15:06.272 "num_base_bdevs_operational": 2, 00:15:06.272 "base_bdevs_list": [ 00:15:06.272 { 00:15:06.272 "name": null, 00:15:06.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.272 "is_configured": false, 00:15:06.272 "data_offset": 2048, 00:15:06.272 "data_size": 63488 00:15:06.272 }, 00:15:06.272 { 00:15:06.272 "name": "pt2", 00:15:06.272 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:06.272 "is_configured": true, 00:15:06.272 "data_offset": 2048, 00:15:06.272 "data_size": 63488 00:15:06.272 }, 00:15:06.272 { 00:15:06.272 "name": null, 00:15:06.272 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:06.272 "is_configured": false, 00:15:06.272 "data_offset": 2048, 00:15:06.272 "data_size": 63488 00:15:06.272 } 00:15:06.272 ] 00:15:06.272 }' 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.272 15:22:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.839 15:22:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:06.839 15:22:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:06.839 15:22:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.839 15:22:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.839 15:22:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.839 15:22:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:06.839 15:22:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:06.839 15:22:53 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.839 15:22:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.839 [2024-11-20 15:22:53.072467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:06.839 [2024-11-20 15:22:53.072732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.839 [2024-11-20 15:22:53.072770] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:06.839 [2024-11-20 15:22:53.072785] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.839 [2024-11-20 15:22:53.073315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.839 [2024-11-20 15:22:53.073343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:06.839 [2024-11-20 15:22:53.073441] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:06.839 [2024-11-20 15:22:53.073473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:06.839 [2024-11-20 15:22:53.073610] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:06.839 [2024-11-20 15:22:53.073621] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:06.839 [2024-11-20 15:22:53.073929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:06.839 [2024-11-20 15:22:53.080339] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:06.839 pt3 00:15:06.839 [2024-11-20 15:22:53.080524] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:06.839 [2024-11-20 15:22:53.080857] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.839 15:22:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:06.839 15:22:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:06.839 15:22:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.839 15:22:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.839 15:22:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.839 15:22:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.839 15:22:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:06.839 15:22:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.840 15:22:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.840 15:22:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.840 15:22:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.840 15:22:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.840 15:22:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.840 15:22:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.840 15:22:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.840 15:22:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.840 15:22:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.840 "name": "raid_bdev1", 00:15:06.840 "uuid": "c3d75102-d6fa-4b4f-b6e7-5b7679db3809", 00:15:06.840 "strip_size_kb": 64, 00:15:06.840 "state": "online", 00:15:06.840 "raid_level": 
"raid5f", 00:15:06.840 "superblock": true, 00:15:06.840 "num_base_bdevs": 3, 00:15:06.840 "num_base_bdevs_discovered": 2, 00:15:06.840 "num_base_bdevs_operational": 2, 00:15:06.840 "base_bdevs_list": [ 00:15:06.840 { 00:15:06.840 "name": null, 00:15:06.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.840 "is_configured": false, 00:15:06.840 "data_offset": 2048, 00:15:06.840 "data_size": 63488 00:15:06.840 }, 00:15:06.840 { 00:15:06.840 "name": "pt2", 00:15:06.840 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:06.840 "is_configured": true, 00:15:06.840 "data_offset": 2048, 00:15:06.840 "data_size": 63488 00:15:06.840 }, 00:15:06.840 { 00:15:06.840 "name": "pt3", 00:15:06.840 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:06.840 "is_configured": true, 00:15:06.840 "data_offset": 2048, 00:15:06.840 "data_size": 63488 00:15:06.840 } 00:15:06.840 ] 00:15:06.840 }' 00:15:06.840 15:22:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.840 15:22:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.098 15:22:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:07.098 15:22:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:07.098 15:22:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.098 15:22:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.098 15:22:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.098 15:22:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:07.357 15:22:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:07.357 15:22:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:07.357 15:22:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.357 15:22:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:07.357 [2024-11-20 15:22:53.587393] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:07.357 15:22:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.357 15:22:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' c3d75102-d6fa-4b4f-b6e7-5b7679db3809 '!=' c3d75102-d6fa-4b4f-b6e7-5b7679db3809 ']' 00:15:07.357 15:22:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 80946 00:15:07.357 15:22:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 80946 ']' 00:15:07.357 15:22:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 80946 00:15:07.357 15:22:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:07.357 15:22:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:07.357 15:22:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80946 00:15:07.357 killing process with pid 80946 00:15:07.357 15:22:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:07.357 15:22:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:07.357 15:22:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80946' 00:15:07.357 15:22:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 80946 00:15:07.357 [2024-11-20 15:22:53.658266] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:07.357 [2024-11-20 15:22:53.658372] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:15:07.357 [2024-11-20 15:22:53.658438] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:07.357 [2024-11-20 15:22:53.658454] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:07.357 15:22:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 80946 00:15:07.659 [2024-11-20 15:22:53.968397] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:09.045 15:22:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:09.045 00:15:09.045 real 0m7.777s 00:15:09.045 user 0m12.085s 00:15:09.045 sys 0m1.653s 00:15:09.045 15:22:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:09.045 15:22:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.045 ************************************ 00:15:09.045 END TEST raid5f_superblock_test 00:15:09.045 ************************************ 00:15:09.045 15:22:55 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:09.045 15:22:55 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:15:09.045 15:22:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:09.045 15:22:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:09.045 15:22:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:09.045 ************************************ 00:15:09.045 START TEST raid5f_rebuild_test 00:15:09.045 ************************************ 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81390 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81390 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81390 ']' 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:09.045 15:22:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.045 [2024-11-20 15:22:55.328518] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:15:09.046 [2024-11-20 15:22:55.329015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:09.046 Zero copy mechanism will not be used. 00:15:09.046 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81390 ] 00:15:09.305 [2024-11-20 15:22:55.529093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.305 [2024-11-20 15:22:55.651492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.564 [2024-11-20 15:22:55.867420] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:09.564 [2024-11-20 15:22:55.867730] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:09.824 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:09.824 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:09.824 15:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:09.824 15:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:09.824 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.824 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.824 BaseBdev1_malloc 00:15:09.824 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.824 
15:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:09.824 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.824 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.824 [2024-11-20 15:22:56.260986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:09.824 [2024-11-20 15:22:56.261200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.824 [2024-11-20 15:22:56.261260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:09.824 [2024-11-20 15:22:56.261347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.824 [2024-11-20 15:22:56.263890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.824 [2024-11-20 15:22:56.264055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:09.824 BaseBdev1 00:15:09.824 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.824 15:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:09.824 15:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:09.824 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.824 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.083 BaseBdev2_malloc 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.083 [2024-11-20 15:22:56.319378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:10.083 [2024-11-20 15:22:56.319651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.083 [2024-11-20 15:22:56.319732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:10.083 [2024-11-20 15:22:56.319839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.083 [2024-11-20 15:22:56.322298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.083 [2024-11-20 15:22:56.322456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:10.083 BaseBdev2 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.083 BaseBdev3_malloc 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.083 [2024-11-20 15:22:56.389075] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:10.083 [2024-11-20 15:22:56.389153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.083 [2024-11-20 15:22:56.389179] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:10.083 [2024-11-20 15:22:56.389193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.083 [2024-11-20 15:22:56.391692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.083 [2024-11-20 15:22:56.391738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:10.083 BaseBdev3 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.083 spare_malloc 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.083 spare_delay 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.083 [2024-11-20 15:22:56.457855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:10.083 [2024-11-20 15:22:56.458075] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.083 [2024-11-20 15:22:56.458134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:10.083 [2024-11-20 15:22:56.458212] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.083 [2024-11-20 15:22:56.460700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.083 [2024-11-20 15:22:56.460849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:10.083 spare 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.083 [2024-11-20 15:22:56.469922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:10.083 [2024-11-20 15:22:56.472040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:10.083 [2024-11-20 15:22:56.472109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:10.083 [2024-11-20 15:22:56.472204] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:10.083 [2024-11-20 15:22:56.472216] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:10.083 [2024-11-20 
15:22:56.472516] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:10.083 [2024-11-20 15:22:56.478305] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:10.083 [2024-11-20 15:22:56.478452] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:10.083 [2024-11-20 15:22:56.478810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.083 15:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.084 15:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.084 15:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.084 15:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.084 15:22:56 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.084 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.084 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.084 15:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.084 "name": "raid_bdev1", 00:15:10.084 "uuid": "0e67185b-ad50-468a-993c-c06f55d23fd8", 00:15:10.084 "strip_size_kb": 64, 00:15:10.084 "state": "online", 00:15:10.084 "raid_level": "raid5f", 00:15:10.084 "superblock": false, 00:15:10.084 "num_base_bdevs": 3, 00:15:10.084 "num_base_bdevs_discovered": 3, 00:15:10.084 "num_base_bdevs_operational": 3, 00:15:10.084 "base_bdevs_list": [ 00:15:10.084 { 00:15:10.084 "name": "BaseBdev1", 00:15:10.084 "uuid": "af885160-dcb8-5039-afc8-80636c858a72", 00:15:10.084 "is_configured": true, 00:15:10.084 "data_offset": 0, 00:15:10.084 "data_size": 65536 00:15:10.084 }, 00:15:10.084 { 00:15:10.084 "name": "BaseBdev2", 00:15:10.084 "uuid": "1b2042a3-1493-59e8-9df6-4e5cc114173c", 00:15:10.084 "is_configured": true, 00:15:10.084 "data_offset": 0, 00:15:10.084 "data_size": 65536 00:15:10.084 }, 00:15:10.084 { 00:15:10.084 "name": "BaseBdev3", 00:15:10.084 "uuid": "da9e24d5-013b-564b-9f75-a6aeab0180ce", 00:15:10.084 "is_configured": true, 00:15:10.084 "data_offset": 0, 00:15:10.084 "data_size": 65536 00:15:10.084 } 00:15:10.084 ] 00:15:10.084 }' 00:15:10.084 15:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.084 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.651 15:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:10.651 15:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:10.651 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.651 15:22:56 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.651 [2024-11-20 15:22:56.937013] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:10.651 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.651 15:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:15:10.651 15:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:10.651 15:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.651 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.651 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.651 15:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.651 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:10.651 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:10.651 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:10.651 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:10.651 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:10.651 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:10.651 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:10.651 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:10.651 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:10.651 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:15:10.651 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:10.651 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:10.651 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:10.651 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:10.910 [2024-11-20 15:22:57.200490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:10.910 /dev/nbd0 00:15:10.910 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:10.910 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:10.910 15:22:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:10.910 15:22:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:10.910 15:22:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:10.910 15:22:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:10.910 15:22:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:10.910 15:22:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:10.910 15:22:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:10.910 15:22:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:10.910 15:22:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:10.910 1+0 records in 00:15:10.910 1+0 records out 00:15:10.910 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424097 s, 9.7 MB/s 00:15:10.910 
15:22:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.910 15:22:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:10.910 15:22:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.910 15:22:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:10.910 15:22:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:10.910 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:10.910 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:10.910 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:10.910 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:10.910 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:10.910 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:15:11.477 512+0 records in 00:15:11.477 512+0 records out 00:15:11.477 67108864 bytes (67 MB, 64 MiB) copied, 0.436424 s, 154 MB/s 00:15:11.477 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:11.477 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:11.477 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:11.477 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:11.477 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:11.477 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:15:11.478 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:11.478 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:11.478 [2024-11-20 15:22:57.949257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.478 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:11.478 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:11.478 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:11.478 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:11.478 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:11.736 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:11.736 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:11.736 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:11.736 15:22:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.736 15:22:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.736 [2024-11-20 15:22:57.965164] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:11.736 15:22:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.736 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:11.736 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.736 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.736 15:22:57 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.736 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.736 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:11.736 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.736 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.736 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.736 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.736 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.736 15:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.736 15:22:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.736 15:22:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.736 15:22:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.736 15:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.736 "name": "raid_bdev1", 00:15:11.736 "uuid": "0e67185b-ad50-468a-993c-c06f55d23fd8", 00:15:11.736 "strip_size_kb": 64, 00:15:11.736 "state": "online", 00:15:11.736 "raid_level": "raid5f", 00:15:11.736 "superblock": false, 00:15:11.736 "num_base_bdevs": 3, 00:15:11.736 "num_base_bdevs_discovered": 2, 00:15:11.736 "num_base_bdevs_operational": 2, 00:15:11.736 "base_bdevs_list": [ 00:15:11.736 { 00:15:11.736 "name": null, 00:15:11.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.736 "is_configured": false, 00:15:11.736 "data_offset": 0, 00:15:11.736 "data_size": 65536 00:15:11.736 }, 00:15:11.736 { 00:15:11.736 
"name": "BaseBdev2", 00:15:11.736 "uuid": "1b2042a3-1493-59e8-9df6-4e5cc114173c", 00:15:11.736 "is_configured": true, 00:15:11.736 "data_offset": 0, 00:15:11.737 "data_size": 65536 00:15:11.737 }, 00:15:11.737 { 00:15:11.737 "name": "BaseBdev3", 00:15:11.737 "uuid": "da9e24d5-013b-564b-9f75-a6aeab0180ce", 00:15:11.737 "is_configured": true, 00:15:11.737 "data_offset": 0, 00:15:11.737 "data_size": 65536 00:15:11.737 } 00:15:11.737 ] 00:15:11.737 }' 00:15:11.737 15:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.737 15:22:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.995 15:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:11.995 15:22:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.995 15:22:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.995 [2024-11-20 15:22:58.400561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:11.995 [2024-11-20 15:22:58.419215] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:15:11.995 15:22:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.995 15:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:11.995 [2024-11-20 15:22:58.428679] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:13.375 15:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.375 15:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.375 15:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.375 15:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:15:13.375 15:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.375 15:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.375 15:22:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.375 15:22:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.375 15:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.375 15:22:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.375 15:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.375 "name": "raid_bdev1", 00:15:13.375 "uuid": "0e67185b-ad50-468a-993c-c06f55d23fd8", 00:15:13.375 "strip_size_kb": 64, 00:15:13.375 "state": "online", 00:15:13.375 "raid_level": "raid5f", 00:15:13.375 "superblock": false, 00:15:13.375 "num_base_bdevs": 3, 00:15:13.375 "num_base_bdevs_discovered": 3, 00:15:13.375 "num_base_bdevs_operational": 3, 00:15:13.375 "process": { 00:15:13.375 "type": "rebuild", 00:15:13.375 "target": "spare", 00:15:13.375 "progress": { 00:15:13.375 "blocks": 18432, 00:15:13.375 "percent": 14 00:15:13.375 } 00:15:13.375 }, 00:15:13.375 "base_bdevs_list": [ 00:15:13.375 { 00:15:13.375 "name": "spare", 00:15:13.375 "uuid": "07f196f3-a1e6-576b-b18b-21fba0e319b0", 00:15:13.375 "is_configured": true, 00:15:13.375 "data_offset": 0, 00:15:13.375 "data_size": 65536 00:15:13.375 }, 00:15:13.375 { 00:15:13.375 "name": "BaseBdev2", 00:15:13.375 "uuid": "1b2042a3-1493-59e8-9df6-4e5cc114173c", 00:15:13.375 "is_configured": true, 00:15:13.375 "data_offset": 0, 00:15:13.375 "data_size": 65536 00:15:13.375 }, 00:15:13.375 { 00:15:13.375 "name": "BaseBdev3", 00:15:13.375 "uuid": "da9e24d5-013b-564b-9f75-a6aeab0180ce", 00:15:13.375 "is_configured": true, 00:15:13.375 "data_offset": 0, 00:15:13.375 
"data_size": 65536 00:15:13.375 } 00:15:13.375 ] 00:15:13.375 }' 00:15:13.375 15:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.375 15:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.375 15:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.375 15:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.375 15:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:13.375 15:22:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.375 15:22:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.375 [2024-11-20 15:22:59.560449] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:13.375 [2024-11-20 15:22:59.639274] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:13.375 [2024-11-20 15:22:59.639374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.375 [2024-11-20 15:22:59.639397] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:13.375 [2024-11-20 15:22:59.639407] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:13.375 15:22:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.375 15:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:13.375 15:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.375 15:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.375 15:22:59 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.375 15:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.375 15:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:13.375 15:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.376 15:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.376 15:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.376 15:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.376 15:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.376 15:22:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.376 15:22:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.376 15:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.376 15:22:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.376 15:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.376 "name": "raid_bdev1", 00:15:13.376 "uuid": "0e67185b-ad50-468a-993c-c06f55d23fd8", 00:15:13.376 "strip_size_kb": 64, 00:15:13.376 "state": "online", 00:15:13.376 "raid_level": "raid5f", 00:15:13.376 "superblock": false, 00:15:13.376 "num_base_bdevs": 3, 00:15:13.376 "num_base_bdevs_discovered": 2, 00:15:13.376 "num_base_bdevs_operational": 2, 00:15:13.376 "base_bdevs_list": [ 00:15:13.376 { 00:15:13.376 "name": null, 00:15:13.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.376 "is_configured": false, 00:15:13.376 "data_offset": 0, 00:15:13.376 "data_size": 65536 00:15:13.376 }, 00:15:13.376 { 00:15:13.376 "name": "BaseBdev2", 00:15:13.376 
"uuid": "1b2042a3-1493-59e8-9df6-4e5cc114173c", 00:15:13.376 "is_configured": true, 00:15:13.376 "data_offset": 0, 00:15:13.376 "data_size": 65536 00:15:13.376 }, 00:15:13.376 { 00:15:13.376 "name": "BaseBdev3", 00:15:13.376 "uuid": "da9e24d5-013b-564b-9f75-a6aeab0180ce", 00:15:13.376 "is_configured": true, 00:15:13.376 "data_offset": 0, 00:15:13.376 "data_size": 65536 00:15:13.376 } 00:15:13.376 ] 00:15:13.376 }' 00:15:13.376 15:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.376 15:22:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.635 15:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:13.635 15:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.635 15:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:13.635 15:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:13.635 15:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.635 15:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.635 15:23:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.635 15:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.635 15:23:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.894 15:23:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.894 15:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.894 "name": "raid_bdev1", 00:15:13.894 "uuid": "0e67185b-ad50-468a-993c-c06f55d23fd8", 00:15:13.894 "strip_size_kb": 64, 00:15:13.894 "state": "online", 00:15:13.894 "raid_level": 
"raid5f", 00:15:13.894 "superblock": false, 00:15:13.894 "num_base_bdevs": 3, 00:15:13.894 "num_base_bdevs_discovered": 2, 00:15:13.894 "num_base_bdevs_operational": 2, 00:15:13.894 "base_bdevs_list": [ 00:15:13.894 { 00:15:13.894 "name": null, 00:15:13.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.894 "is_configured": false, 00:15:13.894 "data_offset": 0, 00:15:13.894 "data_size": 65536 00:15:13.894 }, 00:15:13.894 { 00:15:13.894 "name": "BaseBdev2", 00:15:13.894 "uuid": "1b2042a3-1493-59e8-9df6-4e5cc114173c", 00:15:13.894 "is_configured": true, 00:15:13.894 "data_offset": 0, 00:15:13.894 "data_size": 65536 00:15:13.894 }, 00:15:13.894 { 00:15:13.894 "name": "BaseBdev3", 00:15:13.894 "uuid": "da9e24d5-013b-564b-9f75-a6aeab0180ce", 00:15:13.894 "is_configured": true, 00:15:13.894 "data_offset": 0, 00:15:13.894 "data_size": 65536 00:15:13.894 } 00:15:13.894 ] 00:15:13.894 }' 00:15:13.894 15:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.894 15:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:13.894 15:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.894 15:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:13.894 15:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:13.894 15:23:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.894 15:23:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.894 [2024-11-20 15:23:00.233483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:13.895 [2024-11-20 15:23:00.250489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:13.895 15:23:00 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.895 15:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:13.895 [2024-11-20 15:23:00.259178] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:14.832 15:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.832 15:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.832 15:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.832 15:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.832 15:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.832 15:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.832 15:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.832 15:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.832 15:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.832 15:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.832 15:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.832 "name": "raid_bdev1", 00:15:14.832 "uuid": "0e67185b-ad50-468a-993c-c06f55d23fd8", 00:15:14.832 "strip_size_kb": 64, 00:15:14.832 "state": "online", 00:15:14.832 "raid_level": "raid5f", 00:15:14.832 "superblock": false, 00:15:14.832 "num_base_bdevs": 3, 00:15:14.832 "num_base_bdevs_discovered": 3, 00:15:14.832 "num_base_bdevs_operational": 3, 00:15:14.832 "process": { 00:15:14.832 "type": "rebuild", 00:15:14.832 "target": "spare", 00:15:14.832 "progress": { 00:15:14.832 "blocks": 20480, 00:15:14.832 
"percent": 15 00:15:14.832 } 00:15:14.832 }, 00:15:14.832 "base_bdevs_list": [ 00:15:14.832 { 00:15:14.832 "name": "spare", 00:15:14.832 "uuid": "07f196f3-a1e6-576b-b18b-21fba0e319b0", 00:15:14.832 "is_configured": true, 00:15:14.832 "data_offset": 0, 00:15:14.832 "data_size": 65536 00:15:14.832 }, 00:15:14.832 { 00:15:14.832 "name": "BaseBdev2", 00:15:14.832 "uuid": "1b2042a3-1493-59e8-9df6-4e5cc114173c", 00:15:14.832 "is_configured": true, 00:15:14.832 "data_offset": 0, 00:15:14.832 "data_size": 65536 00:15:14.832 }, 00:15:14.832 { 00:15:14.832 "name": "BaseBdev3", 00:15:14.832 "uuid": "da9e24d5-013b-564b-9f75-a6aeab0180ce", 00:15:14.832 "is_configured": true, 00:15:14.832 "data_offset": 0, 00:15:14.832 "data_size": 65536 00:15:14.832 } 00:15:14.832 ] 00:15:14.832 }' 00:15:14.832 15:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.092 15:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.092 15:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.092 15:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.092 15:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:15.092 15:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:15.092 15:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:15.092 15:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=545 00:15:15.092 15:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:15.092 15:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.092 15:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:15.092 15:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.092 15:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.092 15:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.092 15:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.092 15:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.092 15:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.092 15:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.092 15:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.092 15:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.092 "name": "raid_bdev1", 00:15:15.092 "uuid": "0e67185b-ad50-468a-993c-c06f55d23fd8", 00:15:15.092 "strip_size_kb": 64, 00:15:15.092 "state": "online", 00:15:15.092 "raid_level": "raid5f", 00:15:15.092 "superblock": false, 00:15:15.092 "num_base_bdevs": 3, 00:15:15.092 "num_base_bdevs_discovered": 3, 00:15:15.092 "num_base_bdevs_operational": 3, 00:15:15.092 "process": { 00:15:15.092 "type": "rebuild", 00:15:15.092 "target": "spare", 00:15:15.092 "progress": { 00:15:15.092 "blocks": 22528, 00:15:15.092 "percent": 17 00:15:15.092 } 00:15:15.092 }, 00:15:15.092 "base_bdevs_list": [ 00:15:15.092 { 00:15:15.092 "name": "spare", 00:15:15.092 "uuid": "07f196f3-a1e6-576b-b18b-21fba0e319b0", 00:15:15.092 "is_configured": true, 00:15:15.092 "data_offset": 0, 00:15:15.092 "data_size": 65536 00:15:15.092 }, 00:15:15.092 { 00:15:15.092 "name": "BaseBdev2", 00:15:15.092 "uuid": "1b2042a3-1493-59e8-9df6-4e5cc114173c", 00:15:15.092 "is_configured": true, 00:15:15.092 "data_offset": 0, 00:15:15.092 
"data_size": 65536 00:15:15.092 }, 00:15:15.092 { 00:15:15.092 "name": "BaseBdev3", 00:15:15.092 "uuid": "da9e24d5-013b-564b-9f75-a6aeab0180ce", 00:15:15.092 "is_configured": true, 00:15:15.092 "data_offset": 0, 00:15:15.092 "data_size": 65536 00:15:15.092 } 00:15:15.092 ] 00:15:15.092 }' 00:15:15.092 15:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.092 15:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.092 15:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.092 15:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.092 15:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:16.469 15:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:16.469 15:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.469 15:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.469 15:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.469 15:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.469 15:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.469 15:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.469 15:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.469 15:23:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.469 15:23:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.469 15:23:02 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.469 15:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.469 "name": "raid_bdev1", 00:15:16.469 "uuid": "0e67185b-ad50-468a-993c-c06f55d23fd8", 00:15:16.469 "strip_size_kb": 64, 00:15:16.469 "state": "online", 00:15:16.469 "raid_level": "raid5f", 00:15:16.469 "superblock": false, 00:15:16.469 "num_base_bdevs": 3, 00:15:16.469 "num_base_bdevs_discovered": 3, 00:15:16.469 "num_base_bdevs_operational": 3, 00:15:16.469 "process": { 00:15:16.469 "type": "rebuild", 00:15:16.469 "target": "spare", 00:15:16.469 "progress": { 00:15:16.469 "blocks": 45056, 00:15:16.469 "percent": 34 00:15:16.469 } 00:15:16.469 }, 00:15:16.469 "base_bdevs_list": [ 00:15:16.469 { 00:15:16.469 "name": "spare", 00:15:16.469 "uuid": "07f196f3-a1e6-576b-b18b-21fba0e319b0", 00:15:16.469 "is_configured": true, 00:15:16.469 "data_offset": 0, 00:15:16.469 "data_size": 65536 00:15:16.469 }, 00:15:16.469 { 00:15:16.469 "name": "BaseBdev2", 00:15:16.469 "uuid": "1b2042a3-1493-59e8-9df6-4e5cc114173c", 00:15:16.469 "is_configured": true, 00:15:16.470 "data_offset": 0, 00:15:16.470 "data_size": 65536 00:15:16.470 }, 00:15:16.470 { 00:15:16.470 "name": "BaseBdev3", 00:15:16.470 "uuid": "da9e24d5-013b-564b-9f75-a6aeab0180ce", 00:15:16.470 "is_configured": true, 00:15:16.470 "data_offset": 0, 00:15:16.470 "data_size": 65536 00:15:16.470 } 00:15:16.470 ] 00:15:16.470 }' 00:15:16.470 15:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.470 15:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.470 15:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.470 15:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.470 15:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:15:17.408 15:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:17.408 15:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.408 15:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.408 15:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.408 15:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.408 15:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.408 15:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.408 15:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.408 15:23:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.408 15:23:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.408 15:23:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.408 15:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.408 "name": "raid_bdev1", 00:15:17.408 "uuid": "0e67185b-ad50-468a-993c-c06f55d23fd8", 00:15:17.408 "strip_size_kb": 64, 00:15:17.408 "state": "online", 00:15:17.408 "raid_level": "raid5f", 00:15:17.408 "superblock": false, 00:15:17.408 "num_base_bdevs": 3, 00:15:17.408 "num_base_bdevs_discovered": 3, 00:15:17.408 "num_base_bdevs_operational": 3, 00:15:17.408 "process": { 00:15:17.408 "type": "rebuild", 00:15:17.408 "target": "spare", 00:15:17.408 "progress": { 00:15:17.408 "blocks": 69632, 00:15:17.408 "percent": 53 00:15:17.408 } 00:15:17.408 }, 00:15:17.408 "base_bdevs_list": [ 00:15:17.408 { 00:15:17.408 "name": "spare", 00:15:17.408 "uuid": 
"07f196f3-a1e6-576b-b18b-21fba0e319b0", 00:15:17.408 "is_configured": true, 00:15:17.408 "data_offset": 0, 00:15:17.408 "data_size": 65536 00:15:17.408 }, 00:15:17.408 { 00:15:17.408 "name": "BaseBdev2", 00:15:17.408 "uuid": "1b2042a3-1493-59e8-9df6-4e5cc114173c", 00:15:17.408 "is_configured": true, 00:15:17.408 "data_offset": 0, 00:15:17.408 "data_size": 65536 00:15:17.408 }, 00:15:17.408 { 00:15:17.408 "name": "BaseBdev3", 00:15:17.408 "uuid": "da9e24d5-013b-564b-9f75-a6aeab0180ce", 00:15:17.408 "is_configured": true, 00:15:17.408 "data_offset": 0, 00:15:17.408 "data_size": 65536 00:15:17.408 } 00:15:17.408 ] 00:15:17.408 }' 00:15:17.408 15:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.408 15:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:17.408 15:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.408 15:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:17.408 15:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:18.872 15:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:18.872 15:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:18.872 15:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.872 15:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:18.872 15:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:18.872 15:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.872 15:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.872 15:23:04 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.872 15:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.872 15:23:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.872 15:23:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.872 15:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.872 "name": "raid_bdev1", 00:15:18.872 "uuid": "0e67185b-ad50-468a-993c-c06f55d23fd8", 00:15:18.872 "strip_size_kb": 64, 00:15:18.872 "state": "online", 00:15:18.872 "raid_level": "raid5f", 00:15:18.872 "superblock": false, 00:15:18.872 "num_base_bdevs": 3, 00:15:18.872 "num_base_bdevs_discovered": 3, 00:15:18.872 "num_base_bdevs_operational": 3, 00:15:18.872 "process": { 00:15:18.872 "type": "rebuild", 00:15:18.872 "target": "spare", 00:15:18.872 "progress": { 00:15:18.872 "blocks": 92160, 00:15:18.872 "percent": 70 00:15:18.872 } 00:15:18.872 }, 00:15:18.872 "base_bdevs_list": [ 00:15:18.872 { 00:15:18.872 "name": "spare", 00:15:18.872 "uuid": "07f196f3-a1e6-576b-b18b-21fba0e319b0", 00:15:18.872 "is_configured": true, 00:15:18.872 "data_offset": 0, 00:15:18.872 "data_size": 65536 00:15:18.872 }, 00:15:18.872 { 00:15:18.872 "name": "BaseBdev2", 00:15:18.872 "uuid": "1b2042a3-1493-59e8-9df6-4e5cc114173c", 00:15:18.872 "is_configured": true, 00:15:18.872 "data_offset": 0, 00:15:18.872 "data_size": 65536 00:15:18.872 }, 00:15:18.872 { 00:15:18.872 "name": "BaseBdev3", 00:15:18.872 "uuid": "da9e24d5-013b-564b-9f75-a6aeab0180ce", 00:15:18.872 "is_configured": true, 00:15:18.872 "data_offset": 0, 00:15:18.872 "data_size": 65536 00:15:18.872 } 00:15:18.872 ] 00:15:18.872 }' 00:15:18.872 15:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.872 15:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:18.872 15:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.872 15:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:18.872 15:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:19.809 15:23:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:19.809 15:23:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:19.809 15:23:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.809 15:23:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:19.809 15:23:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:19.809 15:23:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.809 15:23:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.809 15:23:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.809 15:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.809 15:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.809 15:23:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.809 15:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.809 "name": "raid_bdev1", 00:15:19.809 "uuid": "0e67185b-ad50-468a-993c-c06f55d23fd8", 00:15:19.809 "strip_size_kb": 64, 00:15:19.809 "state": "online", 00:15:19.809 "raid_level": "raid5f", 00:15:19.809 "superblock": false, 00:15:19.809 "num_base_bdevs": 3, 00:15:19.809 "num_base_bdevs_discovered": 3, 00:15:19.809 
"num_base_bdevs_operational": 3, 00:15:19.809 "process": { 00:15:19.809 "type": "rebuild", 00:15:19.809 "target": "spare", 00:15:19.809 "progress": { 00:15:19.809 "blocks": 114688, 00:15:19.809 "percent": 87 00:15:19.809 } 00:15:19.809 }, 00:15:19.809 "base_bdevs_list": [ 00:15:19.809 { 00:15:19.809 "name": "spare", 00:15:19.809 "uuid": "07f196f3-a1e6-576b-b18b-21fba0e319b0", 00:15:19.809 "is_configured": true, 00:15:19.809 "data_offset": 0, 00:15:19.809 "data_size": 65536 00:15:19.809 }, 00:15:19.809 { 00:15:19.809 "name": "BaseBdev2", 00:15:19.809 "uuid": "1b2042a3-1493-59e8-9df6-4e5cc114173c", 00:15:19.809 "is_configured": true, 00:15:19.809 "data_offset": 0, 00:15:19.809 "data_size": 65536 00:15:19.809 }, 00:15:19.809 { 00:15:19.809 "name": "BaseBdev3", 00:15:19.809 "uuid": "da9e24d5-013b-564b-9f75-a6aeab0180ce", 00:15:19.809 "is_configured": true, 00:15:19.809 "data_offset": 0, 00:15:19.809 "data_size": 65536 00:15:19.809 } 00:15:19.809 ] 00:15:19.809 }' 00:15:19.809 15:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.809 15:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:19.809 15:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.809 15:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:19.809 15:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:20.378 [2024-11-20 15:23:06.715179] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:20.378 [2024-11-20 15:23:06.715289] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:20.378 [2024-11-20 15:23:06.715339] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.945 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:15:20.945 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:20.945 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.945 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:20.945 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:20.945 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.945 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.945 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.945 15:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.945 15:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.945 15:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.945 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.945 "name": "raid_bdev1", 00:15:20.945 "uuid": "0e67185b-ad50-468a-993c-c06f55d23fd8", 00:15:20.945 "strip_size_kb": 64, 00:15:20.945 "state": "online", 00:15:20.945 "raid_level": "raid5f", 00:15:20.945 "superblock": false, 00:15:20.945 "num_base_bdevs": 3, 00:15:20.945 "num_base_bdevs_discovered": 3, 00:15:20.945 "num_base_bdevs_operational": 3, 00:15:20.945 "base_bdevs_list": [ 00:15:20.945 { 00:15:20.945 "name": "spare", 00:15:20.945 "uuid": "07f196f3-a1e6-576b-b18b-21fba0e319b0", 00:15:20.945 "is_configured": true, 00:15:20.945 "data_offset": 0, 00:15:20.945 "data_size": 65536 00:15:20.945 }, 00:15:20.945 { 00:15:20.945 "name": "BaseBdev2", 00:15:20.945 "uuid": "1b2042a3-1493-59e8-9df6-4e5cc114173c", 00:15:20.945 "is_configured": true, 00:15:20.945 
"data_offset": 0, 00:15:20.945 "data_size": 65536 00:15:20.945 }, 00:15:20.945 { 00:15:20.945 "name": "BaseBdev3", 00:15:20.945 "uuid": "da9e24d5-013b-564b-9f75-a6aeab0180ce", 00:15:20.945 "is_configured": true, 00:15:20.945 "data_offset": 0, 00:15:20.945 "data_size": 65536 00:15:20.945 } 00:15:20.945 ] 00:15:20.945 }' 00:15:20.945 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.945 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:20.945 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.945 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:20.945 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:20.945 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:20.945 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.945 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:20.945 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:20.946 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.946 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.946 15:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.946 15:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.946 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.946 15:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.946 15:23:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.946 "name": "raid_bdev1", 00:15:20.946 "uuid": "0e67185b-ad50-468a-993c-c06f55d23fd8", 00:15:20.946 "strip_size_kb": 64, 00:15:20.946 "state": "online", 00:15:20.946 "raid_level": "raid5f", 00:15:20.946 "superblock": false, 00:15:20.946 "num_base_bdevs": 3, 00:15:20.946 "num_base_bdevs_discovered": 3, 00:15:20.946 "num_base_bdevs_operational": 3, 00:15:20.946 "base_bdevs_list": [ 00:15:20.946 { 00:15:20.946 "name": "spare", 00:15:20.946 "uuid": "07f196f3-a1e6-576b-b18b-21fba0e319b0", 00:15:20.946 "is_configured": true, 00:15:20.946 "data_offset": 0, 00:15:20.946 "data_size": 65536 00:15:20.946 }, 00:15:20.946 { 00:15:20.946 "name": "BaseBdev2", 00:15:20.946 "uuid": "1b2042a3-1493-59e8-9df6-4e5cc114173c", 00:15:20.946 "is_configured": true, 00:15:20.946 "data_offset": 0, 00:15:20.946 "data_size": 65536 00:15:20.946 }, 00:15:20.946 { 00:15:20.946 "name": "BaseBdev3", 00:15:20.946 "uuid": "da9e24d5-013b-564b-9f75-a6aeab0180ce", 00:15:20.946 "is_configured": true, 00:15:20.946 "data_offset": 0, 00:15:20.946 "data_size": 65536 00:15:20.946 } 00:15:20.946 ] 00:15:20.946 }' 00:15:20.946 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.946 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:20.946 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.946 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:20.946 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:20.946 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.946 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.946 15:23:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.946 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.946 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.946 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.946 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.946 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.946 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.946 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.946 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.946 15:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.946 15:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.946 15:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.946 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.946 "name": "raid_bdev1", 00:15:20.946 "uuid": "0e67185b-ad50-468a-993c-c06f55d23fd8", 00:15:20.946 "strip_size_kb": 64, 00:15:20.946 "state": "online", 00:15:20.946 "raid_level": "raid5f", 00:15:20.946 "superblock": false, 00:15:20.946 "num_base_bdevs": 3, 00:15:20.946 "num_base_bdevs_discovered": 3, 00:15:20.946 "num_base_bdevs_operational": 3, 00:15:20.946 "base_bdevs_list": [ 00:15:20.946 { 00:15:20.946 "name": "spare", 00:15:20.946 "uuid": "07f196f3-a1e6-576b-b18b-21fba0e319b0", 00:15:20.946 "is_configured": true, 00:15:20.946 "data_offset": 0, 00:15:20.946 "data_size": 65536 00:15:20.946 }, 00:15:20.946 { 00:15:20.946 
"name": "BaseBdev2", 00:15:20.946 "uuid": "1b2042a3-1493-59e8-9df6-4e5cc114173c", 00:15:20.946 "is_configured": true, 00:15:20.946 "data_offset": 0, 00:15:20.946 "data_size": 65536 00:15:20.946 }, 00:15:20.946 { 00:15:20.946 "name": "BaseBdev3", 00:15:20.946 "uuid": "da9e24d5-013b-564b-9f75-a6aeab0180ce", 00:15:20.946 "is_configured": true, 00:15:20.946 "data_offset": 0, 00:15:20.946 "data_size": 65536 00:15:20.946 } 00:15:20.946 ] 00:15:20.946 }' 00:15:20.946 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.946 15:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.514 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:21.514 15:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.514 15:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.514 [2024-11-20 15:23:07.796931] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:21.514 [2024-11-20 15:23:07.796973] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:21.514 [2024-11-20 15:23:07.797066] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:21.514 [2024-11-20 15:23:07.797159] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:21.514 [2024-11-20 15:23:07.797194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:21.514 15:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.514 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:21.514 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.514 15:23:07 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.514 15:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.514 15:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.514 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:21.514 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:21.514 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:21.514 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:21.514 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:21.514 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:21.514 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:21.514 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:21.514 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:21.514 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:21.514 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:21.514 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:21.514 15:23:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:21.773 /dev/nbd0 00:15:21.773 15:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:21.773 15:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:21.773 15:23:08 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:21.773 15:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:21.773 15:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:21.773 15:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:21.773 15:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:21.773 15:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:21.773 15:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:21.773 15:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:21.773 15:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:21.773 1+0 records in 00:15:21.773 1+0 records out 00:15:21.773 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398874 s, 10.3 MB/s 00:15:21.773 15:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:21.773 15:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:21.773 15:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:21.773 15:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:21.773 15:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:21.773 15:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:21.773 15:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:21.774 15:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:22.032 /dev/nbd1 00:15:22.032 15:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:22.032 15:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:22.032 15:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:22.032 15:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:22.032 15:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:22.032 15:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:22.032 15:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:22.032 15:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:22.032 15:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:22.032 15:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:22.032 15:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:22.032 1+0 records in 00:15:22.032 1+0 records out 00:15:22.032 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477233 s, 8.6 MB/s 00:15:22.032 15:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.032 15:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:22.033 15:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.033 15:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:22.033 15:23:08 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:22.033 15:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:22.033 15:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:22.033 15:23:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:22.292 15:23:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:22.292 15:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:22.292 15:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:22.292 15:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:22.292 15:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:22.292 15:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:22.292 15:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:22.551 15:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:22.551 15:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:22.551 15:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:22.551 15:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:22.551 15:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:22.551 15:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:22.551 15:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:22.551 15:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:15:22.551 15:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:22.551 15:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:22.810 15:23:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:22.810 15:23:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:22.810 15:23:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:22.810 15:23:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:22.810 15:23:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:22.810 15:23:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:22.810 15:23:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:22.810 15:23:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:22.810 15:23:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:22.810 15:23:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81390 00:15:22.810 15:23:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81390 ']' 00:15:22.810 15:23:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81390 00:15:22.810 15:23:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:22.810 15:23:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:22.810 15:23:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81390 00:15:22.810 killing process with pid 81390 00:15:22.810 Received shutdown signal, test time was about 60.000000 seconds 00:15:22.810 00:15:22.810 Latency(us) 00:15:22.810 
[2024-11-20T15:23:09.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:22.810 [2024-11-20T15:23:09.292Z] =================================================================================================================== 00:15:22.810 [2024-11-20T15:23:09.292Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:22.810 15:23:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:22.810 15:23:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:22.810 15:23:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81390' 00:15:22.810 15:23:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81390 00:15:22.810 [2024-11-20 15:23:09.101962] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:22.810 15:23:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81390 00:15:23.069 [2024-11-20 15:23:09.506900] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:24.449 00:15:24.449 real 0m15.434s 00:15:24.449 user 0m18.738s 00:15:24.449 sys 0m2.396s 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.449 ************************************ 00:15:24.449 END TEST raid5f_rebuild_test 00:15:24.449 ************************************ 00:15:24.449 15:23:10 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:24.449 15:23:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:24.449 15:23:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:24.449 15:23:10 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:15:24.449 ************************************ 00:15:24.449 START TEST raid5f_rebuild_test_sb 00:15:24.449 ************************************ 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=81830 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 81830 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81830 ']' 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:24.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:24.449 15:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.449 [2024-11-20 15:23:10.817884] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:15:24.449 [2024-11-20 15:23:10.818022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81830 ] 00:15:24.449 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:24.449 Zero copy mechanism will not be used. 
00:15:24.708 [2024-11-20 15:23:11.001181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.708 [2024-11-20 15:23:11.121945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.967 [2024-11-20 15:23:11.332864] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:24.967 [2024-11-20 15:23:11.332913] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:25.225 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:25.225 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:25.225 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:25.225 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:25.225 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.225 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.225 BaseBdev1_malloc 00:15:25.225 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.225 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:25.225 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.225 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.485 [2024-11-20 15:23:11.709449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:25.485 [2024-11-20 15:23:11.709536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.485 [2024-11-20 15:23:11.709563] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:25.485 
[2024-11-20 15:23:11.709578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.485 [2024-11-20 15:23:11.712108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.485 [2024-11-20 15:23:11.712158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:25.485 BaseBdev1 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.485 BaseBdev2_malloc 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.485 [2024-11-20 15:23:11.766096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:25.485 [2024-11-20 15:23:11.766173] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.485 [2024-11-20 15:23:11.766203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:25.485 [2024-11-20 15:23:11.766217] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.485 [2024-11-20 15:23:11.768602] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.485 [2024-11-20 15:23:11.768646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:25.485 BaseBdev2 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.485 BaseBdev3_malloc 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.485 [2024-11-20 15:23:11.836086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:25.485 [2024-11-20 15:23:11.836157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.485 [2024-11-20 15:23:11.836184] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:25.485 [2024-11-20 15:23:11.836198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.485 [2024-11-20 15:23:11.838586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.485 [2024-11-20 15:23:11.838635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:15:25.485 BaseBdev3 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.485 spare_malloc 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.485 spare_delay 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.485 [2024-11-20 15:23:11.903702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:25.485 [2024-11-20 15:23:11.903777] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.485 [2024-11-20 15:23:11.903801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:25.485 [2024-11-20 15:23:11.903815] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.485 [2024-11-20 15:23:11.906246] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.485 [2024-11-20 15:23:11.906296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:25.485 spare 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.485 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.485 [2024-11-20 15:23:11.915775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:25.485 [2024-11-20 15:23:11.917844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:25.486 [2024-11-20 15:23:11.917920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:25.486 [2024-11-20 15:23:11.918103] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:25.486 [2024-11-20 15:23:11.918123] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:25.486 [2024-11-20 15:23:11.918410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:25.486 [2024-11-20 15:23:11.924797] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:25.486 [2024-11-20 15:23:11.924830] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:25.486 [2024-11-20 15:23:11.925069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.486 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.486 15:23:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:25.486 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.486 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.486 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.486 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.486 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.486 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.486 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.486 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.486 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.486 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.486 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.486 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.486 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.486 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.745 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.745 "name": "raid_bdev1", 00:15:25.745 "uuid": "a979b1a0-e71e-4352-b77e-1eccf6e7d7fb", 00:15:25.745 "strip_size_kb": 64, 00:15:25.745 "state": "online", 00:15:25.745 "raid_level": "raid5f", 00:15:25.745 "superblock": true, 
00:15:25.745 "num_base_bdevs": 3, 00:15:25.745 "num_base_bdevs_discovered": 3, 00:15:25.745 "num_base_bdevs_operational": 3, 00:15:25.745 "base_bdevs_list": [ 00:15:25.745 { 00:15:25.745 "name": "BaseBdev1", 00:15:25.745 "uuid": "97cc40ab-052f-5ec0-9774-7566c92ca0ae", 00:15:25.745 "is_configured": true, 00:15:25.745 "data_offset": 2048, 00:15:25.745 "data_size": 63488 00:15:25.745 }, 00:15:25.745 { 00:15:25.745 "name": "BaseBdev2", 00:15:25.745 "uuid": "e67be42d-ea1c-5948-8476-7f44f5911f87", 00:15:25.745 "is_configured": true, 00:15:25.745 "data_offset": 2048, 00:15:25.745 "data_size": 63488 00:15:25.745 }, 00:15:25.745 { 00:15:25.745 "name": "BaseBdev3", 00:15:25.745 "uuid": "e98e9552-1563-5609-9413-e8376f654e2f", 00:15:25.745 "is_configured": true, 00:15:25.745 "data_offset": 2048, 00:15:25.745 "data_size": 63488 00:15:25.745 } 00:15:25.745 ] 00:15:25.745 }' 00:15:25.745 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.745 15:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.004 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:26.004 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.004 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.004 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:26.004 [2024-11-20 15:23:12.391216] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:26.004 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.004 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:15:26.004 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:26.004 15:23:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.004 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.004 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.004 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.004 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:26.004 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:26.004 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:26.004 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:26.004 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:26.004 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:26.004 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:26.004 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:26.004 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:26.004 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:26.004 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:26.004 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:26.004 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:26.004 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:26.263 
[2024-11-20 15:23:12.666904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:26.263 /dev/nbd0 00:15:26.263 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:26.263 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:26.263 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:26.263 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:26.263 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:26.263 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:26.263 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:26.263 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:26.263 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:26.263 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:26.263 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:26.263 1+0 records in 00:15:26.263 1+0 records out 00:15:26.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494503 s, 8.3 MB/s 00:15:26.263 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:26.263 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:26.263 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:26.263 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:26.264 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:26.264 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:26.264 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:26.264 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:26.264 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:26.264 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:26.264 15:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:15:26.832 496+0 records in 00:15:26.832 496+0 records out 00:15:26.832 65011712 bytes (65 MB, 62 MiB) copied, 0.386812 s, 168 MB/s 00:15:26.832 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:26.832 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:26.832 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:26.832 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:26.832 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:26.832 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:26.832 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:27.091 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:27.091 [2024-11-20 15:23:13.364281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:15:27.091 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:27.091 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:27.091 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:27.091 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:27.091 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:27.091 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:27.091 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:27.091 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:27.091 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.091 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.091 [2024-11-20 15:23:13.392290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:27.091 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.091 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:27.091 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.091 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.091 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.091 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.091 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:27.091 15:23:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.091 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.091 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.091 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.091 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.091 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.091 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.091 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.091 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.091 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.091 "name": "raid_bdev1", 00:15:27.091 "uuid": "a979b1a0-e71e-4352-b77e-1eccf6e7d7fb", 00:15:27.091 "strip_size_kb": 64, 00:15:27.091 "state": "online", 00:15:27.091 "raid_level": "raid5f", 00:15:27.091 "superblock": true, 00:15:27.091 "num_base_bdevs": 3, 00:15:27.091 "num_base_bdevs_discovered": 2, 00:15:27.091 "num_base_bdevs_operational": 2, 00:15:27.091 "base_bdevs_list": [ 00:15:27.091 { 00:15:27.091 "name": null, 00:15:27.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.091 "is_configured": false, 00:15:27.091 "data_offset": 0, 00:15:27.091 "data_size": 63488 00:15:27.091 }, 00:15:27.091 { 00:15:27.091 "name": "BaseBdev2", 00:15:27.091 "uuid": "e67be42d-ea1c-5948-8476-7f44f5911f87", 00:15:27.091 "is_configured": true, 00:15:27.091 "data_offset": 2048, 00:15:27.091 "data_size": 63488 00:15:27.091 }, 00:15:27.091 { 00:15:27.091 "name": "BaseBdev3", 00:15:27.091 "uuid": 
"e98e9552-1563-5609-9413-e8376f654e2f", 00:15:27.091 "is_configured": true, 00:15:27.091 "data_offset": 2048, 00:15:27.091 "data_size": 63488 00:15:27.091 } 00:15:27.091 ] 00:15:27.091 }' 00:15:27.091 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.091 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.659 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:27.659 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.659 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.659 [2024-11-20 15:23:13.839757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:27.659 [2024-11-20 15:23:13.858192] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:15:27.659 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.659 15:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:27.659 [2024-11-20 15:23:13.867400] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:28.596 15:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:28.596 15:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.596 15:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:28.596 15:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:28.596 15:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.596 15:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:15:28.596 15:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.596 15:23:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.596 15:23:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.597 15:23:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.597 15:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.597 "name": "raid_bdev1", 00:15:28.597 "uuid": "a979b1a0-e71e-4352-b77e-1eccf6e7d7fb", 00:15:28.597 "strip_size_kb": 64, 00:15:28.597 "state": "online", 00:15:28.597 "raid_level": "raid5f", 00:15:28.597 "superblock": true, 00:15:28.597 "num_base_bdevs": 3, 00:15:28.597 "num_base_bdevs_discovered": 3, 00:15:28.597 "num_base_bdevs_operational": 3, 00:15:28.597 "process": { 00:15:28.597 "type": "rebuild", 00:15:28.597 "target": "spare", 00:15:28.597 "progress": { 00:15:28.597 "blocks": 20480, 00:15:28.597 "percent": 16 00:15:28.597 } 00:15:28.597 }, 00:15:28.597 "base_bdevs_list": [ 00:15:28.597 { 00:15:28.597 "name": "spare", 00:15:28.597 "uuid": "d115702c-5161-5ba1-9448-1b3a28f574ba", 00:15:28.597 "is_configured": true, 00:15:28.597 "data_offset": 2048, 00:15:28.597 "data_size": 63488 00:15:28.597 }, 00:15:28.597 { 00:15:28.597 "name": "BaseBdev2", 00:15:28.597 "uuid": "e67be42d-ea1c-5948-8476-7f44f5911f87", 00:15:28.597 "is_configured": true, 00:15:28.597 "data_offset": 2048, 00:15:28.597 "data_size": 63488 00:15:28.597 }, 00:15:28.597 { 00:15:28.597 "name": "BaseBdev3", 00:15:28.597 "uuid": "e98e9552-1563-5609-9413-e8376f654e2f", 00:15:28.597 "is_configured": true, 00:15:28.597 "data_offset": 2048, 00:15:28.597 "data_size": 63488 00:15:28.597 } 00:15:28.597 ] 00:15:28.597 }' 00:15:28.597 15:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.597 15:23:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:28.597 15:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.597 15:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:28.597 15:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:28.597 15:23:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.597 15:23:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.597 [2024-11-20 15:23:14.999009] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:28.856 [2024-11-20 15:23:15.077795] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:28.856 [2024-11-20 15:23:15.077897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.856 [2024-11-20 15:23:15.077920] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:28.856 [2024-11-20 15:23:15.077930] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:28.856 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.856 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:28.856 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.856 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.856 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.856 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.856 15:23:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:28.856 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.856 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.856 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.856 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.856 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.856 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.856 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.856 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.856 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.856 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.856 "name": "raid_bdev1", 00:15:28.856 "uuid": "a979b1a0-e71e-4352-b77e-1eccf6e7d7fb", 00:15:28.856 "strip_size_kb": 64, 00:15:28.856 "state": "online", 00:15:28.856 "raid_level": "raid5f", 00:15:28.856 "superblock": true, 00:15:28.856 "num_base_bdevs": 3, 00:15:28.856 "num_base_bdevs_discovered": 2, 00:15:28.856 "num_base_bdevs_operational": 2, 00:15:28.856 "base_bdevs_list": [ 00:15:28.856 { 00:15:28.856 "name": null, 00:15:28.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.856 "is_configured": false, 00:15:28.856 "data_offset": 0, 00:15:28.856 "data_size": 63488 00:15:28.856 }, 00:15:28.856 { 00:15:28.856 "name": "BaseBdev2", 00:15:28.856 "uuid": "e67be42d-ea1c-5948-8476-7f44f5911f87", 00:15:28.856 "is_configured": true, 00:15:28.856 "data_offset": 2048, 00:15:28.856 "data_size": 
63488 00:15:28.856 }, 00:15:28.856 { 00:15:28.856 "name": "BaseBdev3", 00:15:28.856 "uuid": "e98e9552-1563-5609-9413-e8376f654e2f", 00:15:28.856 "is_configured": true, 00:15:28.856 "data_offset": 2048, 00:15:28.856 "data_size": 63488 00:15:28.856 } 00:15:28.856 ] 00:15:28.856 }' 00:15:28.856 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.856 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.160 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:29.160 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.160 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:29.160 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:29.160 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.160 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.160 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.160 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.160 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.160 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.160 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.160 "name": "raid_bdev1", 00:15:29.160 "uuid": "a979b1a0-e71e-4352-b77e-1eccf6e7d7fb", 00:15:29.160 "strip_size_kb": 64, 00:15:29.160 "state": "online", 00:15:29.160 "raid_level": "raid5f", 00:15:29.160 "superblock": true, 00:15:29.160 "num_base_bdevs": 3, 00:15:29.160 
"num_base_bdevs_discovered": 2, 00:15:29.160 "num_base_bdevs_operational": 2, 00:15:29.160 "base_bdevs_list": [ 00:15:29.160 { 00:15:29.160 "name": null, 00:15:29.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.160 "is_configured": false, 00:15:29.160 "data_offset": 0, 00:15:29.160 "data_size": 63488 00:15:29.160 }, 00:15:29.160 { 00:15:29.160 "name": "BaseBdev2", 00:15:29.160 "uuid": "e67be42d-ea1c-5948-8476-7f44f5911f87", 00:15:29.160 "is_configured": true, 00:15:29.160 "data_offset": 2048, 00:15:29.160 "data_size": 63488 00:15:29.160 }, 00:15:29.160 { 00:15:29.160 "name": "BaseBdev3", 00:15:29.160 "uuid": "e98e9552-1563-5609-9413-e8376f654e2f", 00:15:29.160 "is_configured": true, 00:15:29.160 "data_offset": 2048, 00:15:29.160 "data_size": 63488 00:15:29.160 } 00:15:29.160 ] 00:15:29.160 }' 00:15:29.160 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.160 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:29.160 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.160 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:29.160 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:29.160 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.160 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.160 [2024-11-20 15:23:15.626964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:29.418 [2024-11-20 15:23:15.643384] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:15:29.418 15:23:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.418 15:23:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:29.418 [2024-11-20 15:23:15.651453] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:30.356 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.356 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.357 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.357 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:30.357 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.357 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.357 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.357 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.357 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.357 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.357 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.357 "name": "raid_bdev1", 00:15:30.357 "uuid": "a979b1a0-e71e-4352-b77e-1eccf6e7d7fb", 00:15:30.357 "strip_size_kb": 64, 00:15:30.357 "state": "online", 00:15:30.357 "raid_level": "raid5f", 00:15:30.357 "superblock": true, 00:15:30.357 "num_base_bdevs": 3, 00:15:30.357 "num_base_bdevs_discovered": 3, 00:15:30.357 "num_base_bdevs_operational": 3, 00:15:30.357 "process": { 00:15:30.357 "type": "rebuild", 00:15:30.357 "target": "spare", 00:15:30.357 "progress": { 00:15:30.357 "blocks": 20480, 00:15:30.357 "percent": 16 00:15:30.357 } 
00:15:30.357 }, 00:15:30.357 "base_bdevs_list": [ 00:15:30.357 { 00:15:30.357 "name": "spare", 00:15:30.357 "uuid": "d115702c-5161-5ba1-9448-1b3a28f574ba", 00:15:30.357 "is_configured": true, 00:15:30.357 "data_offset": 2048, 00:15:30.357 "data_size": 63488 00:15:30.357 }, 00:15:30.357 { 00:15:30.357 "name": "BaseBdev2", 00:15:30.357 "uuid": "e67be42d-ea1c-5948-8476-7f44f5911f87", 00:15:30.357 "is_configured": true, 00:15:30.357 "data_offset": 2048, 00:15:30.357 "data_size": 63488 00:15:30.357 }, 00:15:30.357 { 00:15:30.357 "name": "BaseBdev3", 00:15:30.357 "uuid": "e98e9552-1563-5609-9413-e8376f654e2f", 00:15:30.357 "is_configured": true, 00:15:30.357 "data_offset": 2048, 00:15:30.357 "data_size": 63488 00:15:30.357 } 00:15:30.357 ] 00:15:30.357 }' 00:15:30.357 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.357 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:30.357 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.357 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:30.357 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:30.357 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:30.357 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:30.357 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:30.357 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:30.357 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=560 00:15:30.357 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:30.357 15:23:16 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.357 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.357 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.357 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:30.357 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.357 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.357 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.357 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.357 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.357 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.617 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.617 "name": "raid_bdev1", 00:15:30.617 "uuid": "a979b1a0-e71e-4352-b77e-1eccf6e7d7fb", 00:15:30.617 "strip_size_kb": 64, 00:15:30.617 "state": "online", 00:15:30.617 "raid_level": "raid5f", 00:15:30.617 "superblock": true, 00:15:30.617 "num_base_bdevs": 3, 00:15:30.617 "num_base_bdevs_discovered": 3, 00:15:30.617 "num_base_bdevs_operational": 3, 00:15:30.617 "process": { 00:15:30.617 "type": "rebuild", 00:15:30.617 "target": "spare", 00:15:30.617 "progress": { 00:15:30.617 "blocks": 22528, 00:15:30.617 "percent": 17 00:15:30.617 } 00:15:30.617 }, 00:15:30.617 "base_bdevs_list": [ 00:15:30.617 { 00:15:30.617 "name": "spare", 00:15:30.617 "uuid": "d115702c-5161-5ba1-9448-1b3a28f574ba", 00:15:30.617 "is_configured": true, 00:15:30.617 "data_offset": 2048, 00:15:30.617 
"data_size": 63488 00:15:30.617 }, 00:15:30.617 { 00:15:30.617 "name": "BaseBdev2", 00:15:30.617 "uuid": "e67be42d-ea1c-5948-8476-7f44f5911f87", 00:15:30.617 "is_configured": true, 00:15:30.617 "data_offset": 2048, 00:15:30.617 "data_size": 63488 00:15:30.617 }, 00:15:30.617 { 00:15:30.617 "name": "BaseBdev3", 00:15:30.617 "uuid": "e98e9552-1563-5609-9413-e8376f654e2f", 00:15:30.617 "is_configured": true, 00:15:30.617 "data_offset": 2048, 00:15:30.617 "data_size": 63488 00:15:30.617 } 00:15:30.617 ] 00:15:30.617 }' 00:15:30.618 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.618 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:30.618 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.618 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:30.618 15:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:31.554 15:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:31.554 15:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:31.554 15:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.554 15:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:31.554 15:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:31.554 15:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.554 15:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.554 15:23:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.554 
15:23:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.554 15:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.554 15:23:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.554 15:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.554 "name": "raid_bdev1", 00:15:31.554 "uuid": "a979b1a0-e71e-4352-b77e-1eccf6e7d7fb", 00:15:31.554 "strip_size_kb": 64, 00:15:31.554 "state": "online", 00:15:31.554 "raid_level": "raid5f", 00:15:31.554 "superblock": true, 00:15:31.554 "num_base_bdevs": 3, 00:15:31.554 "num_base_bdevs_discovered": 3, 00:15:31.554 "num_base_bdevs_operational": 3, 00:15:31.554 "process": { 00:15:31.554 "type": "rebuild", 00:15:31.554 "target": "spare", 00:15:31.554 "progress": { 00:15:31.554 "blocks": 45056, 00:15:31.554 "percent": 35 00:15:31.554 } 00:15:31.554 }, 00:15:31.554 "base_bdevs_list": [ 00:15:31.554 { 00:15:31.554 "name": "spare", 00:15:31.554 "uuid": "d115702c-5161-5ba1-9448-1b3a28f574ba", 00:15:31.554 "is_configured": true, 00:15:31.554 "data_offset": 2048, 00:15:31.554 "data_size": 63488 00:15:31.554 }, 00:15:31.554 { 00:15:31.554 "name": "BaseBdev2", 00:15:31.554 "uuid": "e67be42d-ea1c-5948-8476-7f44f5911f87", 00:15:31.554 "is_configured": true, 00:15:31.554 "data_offset": 2048, 00:15:31.554 "data_size": 63488 00:15:31.554 }, 00:15:31.554 { 00:15:31.554 "name": "BaseBdev3", 00:15:31.554 "uuid": "e98e9552-1563-5609-9413-e8376f654e2f", 00:15:31.554 "is_configured": true, 00:15:31.554 "data_offset": 2048, 00:15:31.554 "data_size": 63488 00:15:31.554 } 00:15:31.554 ] 00:15:31.554 }' 00:15:31.554 15:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.813 15:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:31.813 15:23:18 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.813 15:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:31.813 15:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:32.749 15:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:32.749 15:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.749 15:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.749 15:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.749 15:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.749 15:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.749 15:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.749 15:23:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.749 15:23:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.749 15:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.749 15:23:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.749 15:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.749 "name": "raid_bdev1", 00:15:32.749 "uuid": "a979b1a0-e71e-4352-b77e-1eccf6e7d7fb", 00:15:32.749 "strip_size_kb": 64, 00:15:32.749 "state": "online", 00:15:32.749 "raid_level": "raid5f", 00:15:32.749 "superblock": true, 00:15:32.749 "num_base_bdevs": 3, 00:15:32.749 "num_base_bdevs_discovered": 3, 00:15:32.749 "num_base_bdevs_operational": 
3, 00:15:32.749 "process": { 00:15:32.749 "type": "rebuild", 00:15:32.749 "target": "spare", 00:15:32.749 "progress": { 00:15:32.749 "blocks": 69632, 00:15:32.749 "percent": 54 00:15:32.749 } 00:15:32.749 }, 00:15:32.749 "base_bdevs_list": [ 00:15:32.749 { 00:15:32.749 "name": "spare", 00:15:32.750 "uuid": "d115702c-5161-5ba1-9448-1b3a28f574ba", 00:15:32.750 "is_configured": true, 00:15:32.750 "data_offset": 2048, 00:15:32.750 "data_size": 63488 00:15:32.750 }, 00:15:32.750 { 00:15:32.750 "name": "BaseBdev2", 00:15:32.750 "uuid": "e67be42d-ea1c-5948-8476-7f44f5911f87", 00:15:32.750 "is_configured": true, 00:15:32.750 "data_offset": 2048, 00:15:32.750 "data_size": 63488 00:15:32.750 }, 00:15:32.750 { 00:15:32.750 "name": "BaseBdev3", 00:15:32.750 "uuid": "e98e9552-1563-5609-9413-e8376f654e2f", 00:15:32.750 "is_configured": true, 00:15:32.750 "data_offset": 2048, 00:15:32.750 "data_size": 63488 00:15:32.750 } 00:15:32.750 ] 00:15:32.750 }' 00:15:32.750 15:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.750 15:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:32.750 15:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.750 15:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:32.750 15:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:34.126 15:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:34.126 15:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:34.126 15:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.126 15:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:34.126 
15:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:34.126 15:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.126 15:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.126 15:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.126 15:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.126 15:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.126 15:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.126 15:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.126 "name": "raid_bdev1", 00:15:34.126 "uuid": "a979b1a0-e71e-4352-b77e-1eccf6e7d7fb", 00:15:34.126 "strip_size_kb": 64, 00:15:34.126 "state": "online", 00:15:34.126 "raid_level": "raid5f", 00:15:34.126 "superblock": true, 00:15:34.126 "num_base_bdevs": 3, 00:15:34.126 "num_base_bdevs_discovered": 3, 00:15:34.126 "num_base_bdevs_operational": 3, 00:15:34.126 "process": { 00:15:34.126 "type": "rebuild", 00:15:34.126 "target": "spare", 00:15:34.126 "progress": { 00:15:34.126 "blocks": 92160, 00:15:34.126 "percent": 72 00:15:34.126 } 00:15:34.126 }, 00:15:34.126 "base_bdevs_list": [ 00:15:34.126 { 00:15:34.126 "name": "spare", 00:15:34.126 "uuid": "d115702c-5161-5ba1-9448-1b3a28f574ba", 00:15:34.126 "is_configured": true, 00:15:34.126 "data_offset": 2048, 00:15:34.126 "data_size": 63488 00:15:34.126 }, 00:15:34.126 { 00:15:34.126 "name": "BaseBdev2", 00:15:34.126 "uuid": "e67be42d-ea1c-5948-8476-7f44f5911f87", 00:15:34.126 "is_configured": true, 00:15:34.126 "data_offset": 2048, 00:15:34.126 "data_size": 63488 00:15:34.126 }, 00:15:34.126 { 00:15:34.126 "name": "BaseBdev3", 00:15:34.126 "uuid": 
"e98e9552-1563-5609-9413-e8376f654e2f", 00:15:34.126 "is_configured": true, 00:15:34.126 "data_offset": 2048, 00:15:34.126 "data_size": 63488 00:15:34.126 } 00:15:34.126 ] 00:15:34.126 }' 00:15:34.126 15:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.126 15:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:34.126 15:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.126 15:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:34.126 15:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:35.089 15:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:35.089 15:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:35.089 15:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.089 15:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:35.089 15:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:35.089 15:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.089 15:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.089 15:23:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.089 15:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.089 15:23:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.089 15:23:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.089 
15:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.089 "name": "raid_bdev1", 00:15:35.089 "uuid": "a979b1a0-e71e-4352-b77e-1eccf6e7d7fb", 00:15:35.089 "strip_size_kb": 64, 00:15:35.089 "state": "online", 00:15:35.089 "raid_level": "raid5f", 00:15:35.089 "superblock": true, 00:15:35.089 "num_base_bdevs": 3, 00:15:35.089 "num_base_bdevs_discovered": 3, 00:15:35.089 "num_base_bdevs_operational": 3, 00:15:35.089 "process": { 00:15:35.089 "type": "rebuild", 00:15:35.089 "target": "spare", 00:15:35.089 "progress": { 00:15:35.089 "blocks": 114688, 00:15:35.089 "percent": 90 00:15:35.089 } 00:15:35.089 }, 00:15:35.089 "base_bdevs_list": [ 00:15:35.089 { 00:15:35.089 "name": "spare", 00:15:35.089 "uuid": "d115702c-5161-5ba1-9448-1b3a28f574ba", 00:15:35.089 "is_configured": true, 00:15:35.089 "data_offset": 2048, 00:15:35.089 "data_size": 63488 00:15:35.089 }, 00:15:35.089 { 00:15:35.089 "name": "BaseBdev2", 00:15:35.089 "uuid": "e67be42d-ea1c-5948-8476-7f44f5911f87", 00:15:35.089 "is_configured": true, 00:15:35.089 "data_offset": 2048, 00:15:35.089 "data_size": 63488 00:15:35.089 }, 00:15:35.089 { 00:15:35.089 "name": "BaseBdev3", 00:15:35.089 "uuid": "e98e9552-1563-5609-9413-e8376f654e2f", 00:15:35.089 "is_configured": true, 00:15:35.089 "data_offset": 2048, 00:15:35.089 "data_size": 63488 00:15:35.089 } 00:15:35.089 ] 00:15:35.089 }' 00:15:35.089 15:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.089 15:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:35.089 15:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.089 15:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:35.089 15:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:35.657 [2024-11-20 15:23:21.903609] 
bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:35.657 [2024-11-20 15:23:21.903725] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:35.657 [2024-11-20 15:23:21.903867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.226 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:36.226 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.226 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.226 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.226 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.226 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.226 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.226 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.226 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.226 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.226 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.226 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.226 "name": "raid_bdev1", 00:15:36.226 "uuid": "a979b1a0-e71e-4352-b77e-1eccf6e7d7fb", 00:15:36.226 "strip_size_kb": 64, 00:15:36.226 "state": "online", 00:15:36.226 "raid_level": "raid5f", 00:15:36.226 "superblock": true, 00:15:36.226 "num_base_bdevs": 3, 00:15:36.226 "num_base_bdevs_discovered": 3, 
00:15:36.226 "num_base_bdevs_operational": 3, 00:15:36.226 "base_bdevs_list": [ 00:15:36.226 { 00:15:36.226 "name": "spare", 00:15:36.226 "uuid": "d115702c-5161-5ba1-9448-1b3a28f574ba", 00:15:36.226 "is_configured": true, 00:15:36.226 "data_offset": 2048, 00:15:36.226 "data_size": 63488 00:15:36.226 }, 00:15:36.226 { 00:15:36.226 "name": "BaseBdev2", 00:15:36.226 "uuid": "e67be42d-ea1c-5948-8476-7f44f5911f87", 00:15:36.226 "is_configured": true, 00:15:36.226 "data_offset": 2048, 00:15:36.226 "data_size": 63488 00:15:36.226 }, 00:15:36.226 { 00:15:36.226 "name": "BaseBdev3", 00:15:36.226 "uuid": "e98e9552-1563-5609-9413-e8376f654e2f", 00:15:36.226 "is_configured": true, 00:15:36.226 "data_offset": 2048, 00:15:36.226 "data_size": 63488 00:15:36.226 } 00:15:36.226 ] 00:15:36.226 }' 00:15:36.226 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.226 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:36.226 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.226 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:36.226 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:36.226 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:36.226 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.226 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:36.226 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:36.226 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.226 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:36.226 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.226 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.226 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.486 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.486 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.486 "name": "raid_bdev1", 00:15:36.486 "uuid": "a979b1a0-e71e-4352-b77e-1eccf6e7d7fb", 00:15:36.486 "strip_size_kb": 64, 00:15:36.486 "state": "online", 00:15:36.486 "raid_level": "raid5f", 00:15:36.486 "superblock": true, 00:15:36.486 "num_base_bdevs": 3, 00:15:36.486 "num_base_bdevs_discovered": 3, 00:15:36.486 "num_base_bdevs_operational": 3, 00:15:36.486 "base_bdevs_list": [ 00:15:36.486 { 00:15:36.486 "name": "spare", 00:15:36.486 "uuid": "d115702c-5161-5ba1-9448-1b3a28f574ba", 00:15:36.486 "is_configured": true, 00:15:36.486 "data_offset": 2048, 00:15:36.486 "data_size": 63488 00:15:36.486 }, 00:15:36.486 { 00:15:36.486 "name": "BaseBdev2", 00:15:36.486 "uuid": "e67be42d-ea1c-5948-8476-7f44f5911f87", 00:15:36.486 "is_configured": true, 00:15:36.486 "data_offset": 2048, 00:15:36.486 "data_size": 63488 00:15:36.486 }, 00:15:36.486 { 00:15:36.486 "name": "BaseBdev3", 00:15:36.486 "uuid": "e98e9552-1563-5609-9413-e8376f654e2f", 00:15:36.486 "is_configured": true, 00:15:36.486 "data_offset": 2048, 00:15:36.486 "data_size": 63488 00:15:36.486 } 00:15:36.486 ] 00:15:36.486 }' 00:15:36.486 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.486 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:36.486 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:15:36.486 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:36.486 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:36.486 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.486 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.486 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.486 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.486 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:36.486 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.486 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.486 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.486 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.486 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.486 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.486 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.486 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.486 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.486 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.486 "name": "raid_bdev1", 00:15:36.486 "uuid": 
"a979b1a0-e71e-4352-b77e-1eccf6e7d7fb", 00:15:36.486 "strip_size_kb": 64, 00:15:36.486 "state": "online", 00:15:36.486 "raid_level": "raid5f", 00:15:36.486 "superblock": true, 00:15:36.486 "num_base_bdevs": 3, 00:15:36.486 "num_base_bdevs_discovered": 3, 00:15:36.486 "num_base_bdevs_operational": 3, 00:15:36.486 "base_bdevs_list": [ 00:15:36.486 { 00:15:36.486 "name": "spare", 00:15:36.486 "uuid": "d115702c-5161-5ba1-9448-1b3a28f574ba", 00:15:36.486 "is_configured": true, 00:15:36.486 "data_offset": 2048, 00:15:36.486 "data_size": 63488 00:15:36.486 }, 00:15:36.486 { 00:15:36.486 "name": "BaseBdev2", 00:15:36.486 "uuid": "e67be42d-ea1c-5948-8476-7f44f5911f87", 00:15:36.486 "is_configured": true, 00:15:36.486 "data_offset": 2048, 00:15:36.486 "data_size": 63488 00:15:36.486 }, 00:15:36.486 { 00:15:36.486 "name": "BaseBdev3", 00:15:36.486 "uuid": "e98e9552-1563-5609-9413-e8376f654e2f", 00:15:36.486 "is_configured": true, 00:15:36.486 "data_offset": 2048, 00:15:36.486 "data_size": 63488 00:15:36.486 } 00:15:36.486 ] 00:15:36.486 }' 00:15:36.486 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.486 15:23:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.054 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:37.054 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.054 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.054 [2024-11-20 15:23:23.287789] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:37.054 [2024-11-20 15:23:23.287840] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:37.054 [2024-11-20 15:23:23.287939] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.054 [2024-11-20 15:23:23.288039] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:37.054 [2024-11-20 15:23:23.288065] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:37.054 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.054 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.054 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.054 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.054 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:37.054 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.054 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:37.054 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:37.054 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:37.054 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:37.054 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:37.054 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:37.054 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:37.054 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:37.054 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:37.054 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # 
local i 00:15:37.054 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:37.054 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:37.054 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:37.313 /dev/nbd0 00:15:37.313 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:37.313 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:37.313 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:37.313 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:37.313 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:37.313 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:37.313 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:37.313 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:37.313 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:37.313 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:37.313 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.313 1+0 records in 00:15:37.313 1+0 records out 00:15:37.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453028 s, 9.0 MB/s 00:15:37.313 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.313 15:23:23 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:37.313 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.313 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:37.313 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:37.313 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:37.313 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:37.313 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:37.572 /dev/nbd1 00:15:37.572 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:37.572 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:37.572 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:37.572 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:37.572 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:37.572 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:37.572 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:37.572 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:37.572 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:37.572 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:37.572 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.572 1+0 records in 00:15:37.572 1+0 records out 00:15:37.572 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498556 s, 8.2 MB/s 00:15:37.572 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.572 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:37.572 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.572 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:37.572 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:37.572 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:37.572 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:37.572 15:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:37.830 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:37.830 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:37.830 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:37.830 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:37.830 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:37.830 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:37.830 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:15:38.089 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:38.089 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:38.089 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:38.089 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:38.089 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.089 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:38.089 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:38.089 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:38.089 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.089 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:38.089 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:38.089 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:38.089 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:38.089 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:38.089 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.089 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:38.348 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:38.348 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:38.348 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:38.348 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:38.348 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.348 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.348 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.348 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:38.348 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.348 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.348 [2024-11-20 15:23:24.588226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:38.348 [2024-11-20 15:23:24.588308] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.348 [2024-11-20 15:23:24.588333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:38.348 [2024-11-20 15:23:24.588348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.348 [2024-11-20 15:23:24.591026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.348 [2024-11-20 15:23:24.591075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:38.348 [2024-11-20 15:23:24.591176] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:38.348 [2024-11-20 15:23:24.591238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:38.348 [2024-11-20 15:23:24.591407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:38.348 [2024-11-20 15:23:24.591509] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:38.348 spare 00:15:38.348 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.348 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:38.348 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.348 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.348 [2024-11-20 15:23:24.691465] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:38.348 [2024-11-20 15:23:24.691538] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:38.348 [2024-11-20 15:23:24.691941] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:15:38.348 [2024-11-20 15:23:24.698246] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:38.348 [2024-11-20 15:23:24.698280] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:38.348 [2024-11-20 15:23:24.698536] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.348 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.348 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:38.348 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.348 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.348 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.348 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:15:38.348 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.348 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.348 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.348 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.348 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.348 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.348 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.348 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.348 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.348 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.348 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.348 "name": "raid_bdev1", 00:15:38.348 "uuid": "a979b1a0-e71e-4352-b77e-1eccf6e7d7fb", 00:15:38.348 "strip_size_kb": 64, 00:15:38.348 "state": "online", 00:15:38.348 "raid_level": "raid5f", 00:15:38.348 "superblock": true, 00:15:38.348 "num_base_bdevs": 3, 00:15:38.348 "num_base_bdevs_discovered": 3, 00:15:38.348 "num_base_bdevs_operational": 3, 00:15:38.348 "base_bdevs_list": [ 00:15:38.348 { 00:15:38.348 "name": "spare", 00:15:38.348 "uuid": "d115702c-5161-5ba1-9448-1b3a28f574ba", 00:15:38.348 "is_configured": true, 00:15:38.348 "data_offset": 2048, 00:15:38.348 "data_size": 63488 00:15:38.348 }, 00:15:38.348 { 00:15:38.348 "name": "BaseBdev2", 00:15:38.349 "uuid": "e67be42d-ea1c-5948-8476-7f44f5911f87", 00:15:38.349 "is_configured": true, 00:15:38.349 "data_offset": 
2048, 00:15:38.349 "data_size": 63488 00:15:38.349 }, 00:15:38.349 { 00:15:38.349 "name": "BaseBdev3", 00:15:38.349 "uuid": "e98e9552-1563-5609-9413-e8376f654e2f", 00:15:38.349 "is_configured": true, 00:15:38.349 "data_offset": 2048, 00:15:38.349 "data_size": 63488 00:15:38.349 } 00:15:38.349 ] 00:15:38.349 }' 00:15:38.349 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.349 15:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.916 "name": "raid_bdev1", 00:15:38.916 "uuid": "a979b1a0-e71e-4352-b77e-1eccf6e7d7fb", 00:15:38.916 "strip_size_kb": 64, 00:15:38.916 "state": "online", 00:15:38.916 "raid_level": "raid5f", 00:15:38.916 "superblock": true, 00:15:38.916 
"num_base_bdevs": 3, 00:15:38.916 "num_base_bdevs_discovered": 3, 00:15:38.916 "num_base_bdevs_operational": 3, 00:15:38.916 "base_bdevs_list": [ 00:15:38.916 { 00:15:38.916 "name": "spare", 00:15:38.916 "uuid": "d115702c-5161-5ba1-9448-1b3a28f574ba", 00:15:38.916 "is_configured": true, 00:15:38.916 "data_offset": 2048, 00:15:38.916 "data_size": 63488 00:15:38.916 }, 00:15:38.916 { 00:15:38.916 "name": "BaseBdev2", 00:15:38.916 "uuid": "e67be42d-ea1c-5948-8476-7f44f5911f87", 00:15:38.916 "is_configured": true, 00:15:38.916 "data_offset": 2048, 00:15:38.916 "data_size": 63488 00:15:38.916 }, 00:15:38.916 { 00:15:38.916 "name": "BaseBdev3", 00:15:38.916 "uuid": "e98e9552-1563-5609-9413-e8376f654e2f", 00:15:38.916 "is_configured": true, 00:15:38.916 "data_offset": 2048, 00:15:38.916 "data_size": 63488 00:15:38.916 } 00:15:38.916 ] 00:15:38.916 }' 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:38.916 15:23:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.916 [2024-11-20 15:23:25.320523] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.916 "name": "raid_bdev1", 00:15:38.916 "uuid": "a979b1a0-e71e-4352-b77e-1eccf6e7d7fb", 00:15:38.916 "strip_size_kb": 64, 00:15:38.916 "state": "online", 00:15:38.916 "raid_level": "raid5f", 00:15:38.916 "superblock": true, 00:15:38.916 "num_base_bdevs": 3, 00:15:38.916 "num_base_bdevs_discovered": 2, 00:15:38.916 "num_base_bdevs_operational": 2, 00:15:38.916 "base_bdevs_list": [ 00:15:38.916 { 00:15:38.916 "name": null, 00:15:38.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.916 "is_configured": false, 00:15:38.916 "data_offset": 0, 00:15:38.916 "data_size": 63488 00:15:38.916 }, 00:15:38.916 { 00:15:38.916 "name": "BaseBdev2", 00:15:38.916 "uuid": "e67be42d-ea1c-5948-8476-7f44f5911f87", 00:15:38.916 "is_configured": true, 00:15:38.916 "data_offset": 2048, 00:15:38.916 "data_size": 63488 00:15:38.916 }, 00:15:38.916 { 00:15:38.916 "name": "BaseBdev3", 00:15:38.916 "uuid": "e98e9552-1563-5609-9413-e8376f654e2f", 00:15:38.916 "is_configured": true, 00:15:38.916 "data_offset": 2048, 00:15:38.916 "data_size": 63488 00:15:38.916 } 00:15:38.916 ] 00:15:38.916 }' 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.916 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.483 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:39.483 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.483 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.483 [2024-11-20 15:23:25.708001] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:39.483 [2024-11-20 15:23:25.708191] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:39.483 [2024-11-20 15:23:25.708210] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:39.483 [2024-11-20 15:23:25.708258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:39.483 [2024-11-20 15:23:25.724989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:15:39.483 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.483 15:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:39.483 [2024-11-20 15:23:25.733030] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:40.419 15:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:40.419 15:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.419 15:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:40.419 15:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:40.419 15:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.419 15:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.419 15:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.419 15:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.419 15:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:15:40.419 15:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.419 15:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.419 "name": "raid_bdev1", 00:15:40.419 "uuid": "a979b1a0-e71e-4352-b77e-1eccf6e7d7fb", 00:15:40.419 "strip_size_kb": 64, 00:15:40.419 "state": "online", 00:15:40.419 "raid_level": "raid5f", 00:15:40.419 "superblock": true, 00:15:40.419 "num_base_bdevs": 3, 00:15:40.419 "num_base_bdevs_discovered": 3, 00:15:40.419 "num_base_bdevs_operational": 3, 00:15:40.419 "process": { 00:15:40.419 "type": "rebuild", 00:15:40.419 "target": "spare", 00:15:40.419 "progress": { 00:15:40.419 "blocks": 20480, 00:15:40.419 "percent": 16 00:15:40.419 } 00:15:40.419 }, 00:15:40.419 "base_bdevs_list": [ 00:15:40.419 { 00:15:40.419 "name": "spare", 00:15:40.419 "uuid": "d115702c-5161-5ba1-9448-1b3a28f574ba", 00:15:40.419 "is_configured": true, 00:15:40.419 "data_offset": 2048, 00:15:40.419 "data_size": 63488 00:15:40.419 }, 00:15:40.419 { 00:15:40.419 "name": "BaseBdev2", 00:15:40.419 "uuid": "e67be42d-ea1c-5948-8476-7f44f5911f87", 00:15:40.419 "is_configured": true, 00:15:40.419 "data_offset": 2048, 00:15:40.419 "data_size": 63488 00:15:40.419 }, 00:15:40.419 { 00:15:40.419 "name": "BaseBdev3", 00:15:40.419 "uuid": "e98e9552-1563-5609-9413-e8376f654e2f", 00:15:40.419 "is_configured": true, 00:15:40.419 "data_offset": 2048, 00:15:40.419 "data_size": 63488 00:15:40.419 } 00:15:40.419 ] 00:15:40.419 }' 00:15:40.419 15:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.419 15:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:40.419 15:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.419 15:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:15:40.419 15:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:40.419 15:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.419 15:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.419 [2024-11-20 15:23:26.864729] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:40.678 [2024-11-20 15:23:26.943296] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:40.678 [2024-11-20 15:23:26.943398] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.678 [2024-11-20 15:23:26.943417] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:40.678 [2024-11-20 15:23:26.943429] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:40.678 15:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.678 15:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:40.678 15:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.678 15:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.678 15:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.678 15:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.678 15:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:40.678 15:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.678 15:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.678 15:23:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.678 15:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.678 15:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.678 15:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.678 15:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.678 15:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.678 15:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.678 15:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.678 "name": "raid_bdev1", 00:15:40.678 "uuid": "a979b1a0-e71e-4352-b77e-1eccf6e7d7fb", 00:15:40.678 "strip_size_kb": 64, 00:15:40.678 "state": "online", 00:15:40.678 "raid_level": "raid5f", 00:15:40.678 "superblock": true, 00:15:40.678 "num_base_bdevs": 3, 00:15:40.678 "num_base_bdevs_discovered": 2, 00:15:40.678 "num_base_bdevs_operational": 2, 00:15:40.678 "base_bdevs_list": [ 00:15:40.678 { 00:15:40.678 "name": null, 00:15:40.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.678 "is_configured": false, 00:15:40.678 "data_offset": 0, 00:15:40.678 "data_size": 63488 00:15:40.678 }, 00:15:40.678 { 00:15:40.678 "name": "BaseBdev2", 00:15:40.678 "uuid": "e67be42d-ea1c-5948-8476-7f44f5911f87", 00:15:40.678 "is_configured": true, 00:15:40.678 "data_offset": 2048, 00:15:40.678 "data_size": 63488 00:15:40.678 }, 00:15:40.678 { 00:15:40.678 "name": "BaseBdev3", 00:15:40.678 "uuid": "e98e9552-1563-5609-9413-e8376f654e2f", 00:15:40.678 "is_configured": true, 00:15:40.678 "data_offset": 2048, 00:15:40.678 "data_size": 63488 00:15:40.678 } 00:15:40.678 ] 00:15:40.678 }' 00:15:40.678 15:23:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.678 15:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.936 15:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:40.936 15:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.936 15:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.936 [2024-11-20 15:23:27.412052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:40.936 [2024-11-20 15:23:27.412303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.936 [2024-11-20 15:23:27.412338] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:40.936 [2024-11-20 15:23:27.412358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.936 [2024-11-20 15:23:27.412936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.936 [2024-11-20 15:23:27.412964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:40.936 [2024-11-20 15:23:27.413073] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:40.936 [2024-11-20 15:23:27.413094] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:40.936 [2024-11-20 15:23:27.413107] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:40.936 [2024-11-20 15:23:27.413135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:41.195 [2024-11-20 15:23:27.429741] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:15:41.195 spare 00:15:41.195 15:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.195 15:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:41.195 [2024-11-20 15:23:27.438150] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:42.140 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.140 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.140 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:42.140 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:42.140 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.140 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.140 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.140 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.140 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.140 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.140 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.140 "name": "raid_bdev1", 00:15:42.140 "uuid": "a979b1a0-e71e-4352-b77e-1eccf6e7d7fb", 00:15:42.140 "strip_size_kb": 64, 00:15:42.140 "state": 
"online", 00:15:42.140 "raid_level": "raid5f", 00:15:42.140 "superblock": true, 00:15:42.140 "num_base_bdevs": 3, 00:15:42.140 "num_base_bdevs_discovered": 3, 00:15:42.140 "num_base_bdevs_operational": 3, 00:15:42.140 "process": { 00:15:42.140 "type": "rebuild", 00:15:42.140 "target": "spare", 00:15:42.140 "progress": { 00:15:42.140 "blocks": 20480, 00:15:42.140 "percent": 16 00:15:42.140 } 00:15:42.140 }, 00:15:42.141 "base_bdevs_list": [ 00:15:42.141 { 00:15:42.141 "name": "spare", 00:15:42.141 "uuid": "d115702c-5161-5ba1-9448-1b3a28f574ba", 00:15:42.141 "is_configured": true, 00:15:42.141 "data_offset": 2048, 00:15:42.141 "data_size": 63488 00:15:42.141 }, 00:15:42.141 { 00:15:42.141 "name": "BaseBdev2", 00:15:42.141 "uuid": "e67be42d-ea1c-5948-8476-7f44f5911f87", 00:15:42.141 "is_configured": true, 00:15:42.141 "data_offset": 2048, 00:15:42.141 "data_size": 63488 00:15:42.141 }, 00:15:42.141 { 00:15:42.141 "name": "BaseBdev3", 00:15:42.141 "uuid": "e98e9552-1563-5609-9413-e8376f654e2f", 00:15:42.141 "is_configured": true, 00:15:42.141 "data_offset": 2048, 00:15:42.141 "data_size": 63488 00:15:42.141 } 00:15:42.141 ] 00:15:42.141 }' 00:15:42.141 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.141 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:42.141 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.141 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:42.141 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:42.141 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.141 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.141 [2024-11-20 15:23:28.601866] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:42.400 [2024-11-20 15:23:28.648627] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:42.400 [2024-11-20 15:23:28.649026] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.400 [2024-11-20 15:23:28.649141] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:42.400 [2024-11-20 15:23:28.649183] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:42.400 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.400 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:42.400 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.400 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.400 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.400 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.400 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:42.400 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.400 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.400 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.400 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.400 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.400 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.400 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.400 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.400 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.400 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.400 "name": "raid_bdev1", 00:15:42.400 "uuid": "a979b1a0-e71e-4352-b77e-1eccf6e7d7fb", 00:15:42.400 "strip_size_kb": 64, 00:15:42.400 "state": "online", 00:15:42.400 "raid_level": "raid5f", 00:15:42.400 "superblock": true, 00:15:42.400 "num_base_bdevs": 3, 00:15:42.400 "num_base_bdevs_discovered": 2, 00:15:42.400 "num_base_bdevs_operational": 2, 00:15:42.400 "base_bdevs_list": [ 00:15:42.400 { 00:15:42.400 "name": null, 00:15:42.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.400 "is_configured": false, 00:15:42.400 "data_offset": 0, 00:15:42.400 "data_size": 63488 00:15:42.400 }, 00:15:42.400 { 00:15:42.400 "name": "BaseBdev2", 00:15:42.400 "uuid": "e67be42d-ea1c-5948-8476-7f44f5911f87", 00:15:42.400 "is_configured": true, 00:15:42.400 "data_offset": 2048, 00:15:42.400 "data_size": 63488 00:15:42.400 }, 00:15:42.400 { 00:15:42.400 "name": "BaseBdev3", 00:15:42.400 "uuid": "e98e9552-1563-5609-9413-e8376f654e2f", 00:15:42.400 "is_configured": true, 00:15:42.400 "data_offset": 2048, 00:15:42.400 "data_size": 63488 00:15:42.400 } 00:15:42.400 ] 00:15:42.400 }' 00:15:42.400 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.400 15:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.659 15:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:42.659 15:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:15:42.659 15:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:42.659 15:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:42.659 15:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.659 15:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.659 15:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.659 15:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.659 15:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.926 15:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.926 15:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.926 "name": "raid_bdev1", 00:15:42.926 "uuid": "a979b1a0-e71e-4352-b77e-1eccf6e7d7fb", 00:15:42.926 "strip_size_kb": 64, 00:15:42.926 "state": "online", 00:15:42.927 "raid_level": "raid5f", 00:15:42.927 "superblock": true, 00:15:42.927 "num_base_bdevs": 3, 00:15:42.927 "num_base_bdevs_discovered": 2, 00:15:42.927 "num_base_bdevs_operational": 2, 00:15:42.927 "base_bdevs_list": [ 00:15:42.927 { 00:15:42.927 "name": null, 00:15:42.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.927 "is_configured": false, 00:15:42.927 "data_offset": 0, 00:15:42.927 "data_size": 63488 00:15:42.927 }, 00:15:42.927 { 00:15:42.927 "name": "BaseBdev2", 00:15:42.927 "uuid": "e67be42d-ea1c-5948-8476-7f44f5911f87", 00:15:42.927 "is_configured": true, 00:15:42.927 "data_offset": 2048, 00:15:42.927 "data_size": 63488 00:15:42.927 }, 00:15:42.927 { 00:15:42.927 "name": "BaseBdev3", 00:15:42.927 "uuid": "e98e9552-1563-5609-9413-e8376f654e2f", 00:15:42.927 "is_configured": true, 
00:15:42.927 "data_offset": 2048, 00:15:42.927 "data_size": 63488 00:15:42.927 } 00:15:42.927 ] 00:15:42.927 }' 00:15:42.927 15:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.927 15:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:42.927 15:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.927 15:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:42.927 15:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:42.927 15:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.927 15:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.927 15:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.927 15:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:42.927 15:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.927 15:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.927 [2024-11-20 15:23:29.279602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:42.927 [2024-11-20 15:23:29.279696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.927 [2024-11-20 15:23:29.279730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:42.927 [2024-11-20 15:23:29.279744] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.927 [2024-11-20 15:23:29.280277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.927 [2024-11-20 
15:23:29.280299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:42.927 [2024-11-20 15:23:29.280398] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:42.927 [2024-11-20 15:23:29.280415] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:42.927 [2024-11-20 15:23:29.280444] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:42.927 [2024-11-20 15:23:29.280459] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:42.927 BaseBdev1 00:15:42.927 15:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.927 15:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:43.863 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:43.863 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.863 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.863 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.863 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.863 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:43.863 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.863 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.863 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.863 15:23:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.863 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.863 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.863 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.863 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.863 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.863 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.863 "name": "raid_bdev1", 00:15:43.863 "uuid": "a979b1a0-e71e-4352-b77e-1eccf6e7d7fb", 00:15:43.863 "strip_size_kb": 64, 00:15:43.863 "state": "online", 00:15:43.863 "raid_level": "raid5f", 00:15:43.863 "superblock": true, 00:15:43.863 "num_base_bdevs": 3, 00:15:43.863 "num_base_bdevs_discovered": 2, 00:15:43.863 "num_base_bdevs_operational": 2, 00:15:43.863 "base_bdevs_list": [ 00:15:43.863 { 00:15:43.863 "name": null, 00:15:43.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.863 "is_configured": false, 00:15:43.863 "data_offset": 0, 00:15:43.863 "data_size": 63488 00:15:43.863 }, 00:15:43.863 { 00:15:43.863 "name": "BaseBdev2", 00:15:43.863 "uuid": "e67be42d-ea1c-5948-8476-7f44f5911f87", 00:15:43.863 "is_configured": true, 00:15:43.863 "data_offset": 2048, 00:15:43.863 "data_size": 63488 00:15:43.863 }, 00:15:43.863 { 00:15:43.863 "name": "BaseBdev3", 00:15:43.863 "uuid": "e98e9552-1563-5609-9413-e8376f654e2f", 00:15:43.864 "is_configured": true, 00:15:43.864 "data_offset": 2048, 00:15:43.864 "data_size": 63488 00:15:43.864 } 00:15:43.864 ] 00:15:43.864 }' 00:15:43.864 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.864 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:44.432 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:44.432 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.432 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:44.432 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:44.432 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.432 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.432 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.432 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.432 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.432 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.432 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.432 "name": "raid_bdev1", 00:15:44.432 "uuid": "a979b1a0-e71e-4352-b77e-1eccf6e7d7fb", 00:15:44.432 "strip_size_kb": 64, 00:15:44.432 "state": "online", 00:15:44.432 "raid_level": "raid5f", 00:15:44.432 "superblock": true, 00:15:44.432 "num_base_bdevs": 3, 00:15:44.432 "num_base_bdevs_discovered": 2, 00:15:44.432 "num_base_bdevs_operational": 2, 00:15:44.432 "base_bdevs_list": [ 00:15:44.432 { 00:15:44.432 "name": null, 00:15:44.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.432 "is_configured": false, 00:15:44.432 "data_offset": 0, 00:15:44.432 "data_size": 63488 00:15:44.432 }, 00:15:44.432 { 00:15:44.432 "name": "BaseBdev2", 00:15:44.432 "uuid": "e67be42d-ea1c-5948-8476-7f44f5911f87", 
00:15:44.432 "is_configured": true, 00:15:44.432 "data_offset": 2048, 00:15:44.432 "data_size": 63488 00:15:44.432 }, 00:15:44.432 { 00:15:44.432 "name": "BaseBdev3", 00:15:44.432 "uuid": "e98e9552-1563-5609-9413-e8376f654e2f", 00:15:44.432 "is_configured": true, 00:15:44.432 "data_offset": 2048, 00:15:44.432 "data_size": 63488 00:15:44.432 } 00:15:44.432 ] 00:15:44.432 }' 00:15:44.432 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.432 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:44.432 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.432 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:44.432 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:44.432 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:15:44.432 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:44.432 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:44.432 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:44.432 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:44.432 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:44.432 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:44.432 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.432 15:23:30 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.432 [2024-11-20 15:23:30.889599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:44.432 [2024-11-20 15:23:30.889794] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:44.432 [2024-11-20 15:23:30.889814] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:44.432 request: 00:15:44.432 { 00:15:44.432 "base_bdev": "BaseBdev1", 00:15:44.432 "raid_bdev": "raid_bdev1", 00:15:44.432 "method": "bdev_raid_add_base_bdev", 00:15:44.432 "req_id": 1 00:15:44.432 } 00:15:44.432 Got JSON-RPC error response 00:15:44.432 response: 00:15:44.432 { 00:15:44.432 "code": -22, 00:15:44.432 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:44.432 } 00:15:44.432 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:44.432 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:15:44.432 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:44.432 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:44.432 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:44.432 15:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:45.809 15:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:45.809 15:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.809 15:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.809 15:23:31 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.809 15:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.809 15:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:45.809 15:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.809 15:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.809 15:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.809 15:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.809 15:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.809 15:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.809 15:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.809 15:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.809 15:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.809 15:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.809 "name": "raid_bdev1", 00:15:45.809 "uuid": "a979b1a0-e71e-4352-b77e-1eccf6e7d7fb", 00:15:45.809 "strip_size_kb": 64, 00:15:45.809 "state": "online", 00:15:45.809 "raid_level": "raid5f", 00:15:45.809 "superblock": true, 00:15:45.809 "num_base_bdevs": 3, 00:15:45.809 "num_base_bdevs_discovered": 2, 00:15:45.809 "num_base_bdevs_operational": 2, 00:15:45.809 "base_bdevs_list": [ 00:15:45.810 { 00:15:45.810 "name": null, 00:15:45.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.810 "is_configured": false, 00:15:45.810 "data_offset": 0, 00:15:45.810 "data_size": 63488 00:15:45.810 }, 00:15:45.810 { 00:15:45.810 
"name": "BaseBdev2", 00:15:45.810 "uuid": "e67be42d-ea1c-5948-8476-7f44f5911f87", 00:15:45.810 "is_configured": true, 00:15:45.810 "data_offset": 2048, 00:15:45.810 "data_size": 63488 00:15:45.810 }, 00:15:45.810 { 00:15:45.810 "name": "BaseBdev3", 00:15:45.810 "uuid": "e98e9552-1563-5609-9413-e8376f654e2f", 00:15:45.810 "is_configured": true, 00:15:45.810 "data_offset": 2048, 00:15:45.810 "data_size": 63488 00:15:45.810 } 00:15:45.810 ] 00:15:45.810 }' 00:15:45.810 15:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.810 15:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.069 15:23:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:46.069 15:23:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.069 15:23:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:46.069 15:23:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:46.069 15:23:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.069 15:23:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.069 15:23:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.069 15:23:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.069 15:23:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.069 15:23:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.069 15:23:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.069 "name": "raid_bdev1", 00:15:46.069 "uuid": "a979b1a0-e71e-4352-b77e-1eccf6e7d7fb", 00:15:46.069 
"strip_size_kb": 64, 00:15:46.069 "state": "online", 00:15:46.069 "raid_level": "raid5f", 00:15:46.069 "superblock": true, 00:15:46.069 "num_base_bdevs": 3, 00:15:46.069 "num_base_bdevs_discovered": 2, 00:15:46.069 "num_base_bdevs_operational": 2, 00:15:46.069 "base_bdevs_list": [ 00:15:46.069 { 00:15:46.069 "name": null, 00:15:46.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.069 "is_configured": false, 00:15:46.069 "data_offset": 0, 00:15:46.069 "data_size": 63488 00:15:46.069 }, 00:15:46.069 { 00:15:46.069 "name": "BaseBdev2", 00:15:46.069 "uuid": "e67be42d-ea1c-5948-8476-7f44f5911f87", 00:15:46.069 "is_configured": true, 00:15:46.069 "data_offset": 2048, 00:15:46.069 "data_size": 63488 00:15:46.069 }, 00:15:46.069 { 00:15:46.069 "name": "BaseBdev3", 00:15:46.069 "uuid": "e98e9552-1563-5609-9413-e8376f654e2f", 00:15:46.069 "is_configured": true, 00:15:46.069 "data_offset": 2048, 00:15:46.069 "data_size": 63488 00:15:46.069 } 00:15:46.069 ] 00:15:46.069 }' 00:15:46.069 15:23:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.069 15:23:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:46.069 15:23:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.069 15:23:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:46.069 15:23:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 81830 00:15:46.069 15:23:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81830 ']' 00:15:46.069 15:23:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 81830 00:15:46.069 15:23:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:46.069 15:23:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:46.069 15:23:32 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81830 00:15:46.069 killing process with pid 81830 00:15:46.069 Received shutdown signal, test time was about 60.000000 seconds 00:15:46.069 00:15:46.069 Latency(us) 00:15:46.069 [2024-11-20T15:23:32.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.069 [2024-11-20T15:23:32.551Z] =================================================================================================================== 00:15:46.069 [2024-11-20T15:23:32.551Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:46.069 15:23:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:46.070 15:23:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:46.070 15:23:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81830' 00:15:46.070 15:23:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 81830 00:15:46.070 [2024-11-20 15:23:32.524854] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:46.070 15:23:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 81830 00:15:46.070 [2024-11-20 15:23:32.524987] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:46.070 [2024-11-20 15:23:32.525057] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:46.070 [2024-11-20 15:23:32.525072] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:46.649 [2024-11-20 15:23:32.941662] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:48.026 15:23:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:48.026 00:15:48.026 real 0m23.374s 00:15:48.026 user 0m29.712s 
00:15:48.026 sys 0m3.207s 00:15:48.026 ************************************ 00:15:48.026 END TEST raid5f_rebuild_test_sb 00:15:48.026 ************************************ 00:15:48.026 15:23:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:48.026 15:23:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.026 15:23:34 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:48.026 15:23:34 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:15:48.026 15:23:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:48.026 15:23:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:48.026 15:23:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:48.026 ************************************ 00:15:48.026 START TEST raid5f_state_function_test 00:15:48.026 ************************************ 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82588 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:48.026 Process raid pid: 82588 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82588' 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82588 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82588 ']' 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:48.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:48.026 15:23:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.026 [2024-11-20 15:23:34.265494] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:15:48.026 [2024-11-20 15:23:34.265880] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:48.026 [2024-11-20 15:23:34.453088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.285 [2024-11-20 15:23:34.576197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.543 [2024-11-20 15:23:34.821945] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:48.543 [2024-11-20 15:23:34.822253] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:48.802 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:48.802 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:48.802 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:48.802 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.802 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.802 [2024-11-20 15:23:35.122896] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:48.802 [2024-11-20 15:23:35.122961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:48.802 [2024-11-20 15:23:35.122974] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:48.802 [2024-11-20 15:23:35.122987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:48.802 [2024-11-20 15:23:35.122996] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:15:48.802 [2024-11-20 15:23:35.123008] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:48.802 [2024-11-20 15:23:35.123016] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:48.802 [2024-11-20 15:23:35.123028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:48.802 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.802 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:48.802 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:48.802 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.802 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.802 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.802 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:48.802 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.802 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.802 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.802 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.802 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.802 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.802 15:23:35 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.802 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.802 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.802 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.802 "name": "Existed_Raid", 00:15:48.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.802 "strip_size_kb": 64, 00:15:48.802 "state": "configuring", 00:15:48.802 "raid_level": "raid5f", 00:15:48.802 "superblock": false, 00:15:48.802 "num_base_bdevs": 4, 00:15:48.802 "num_base_bdevs_discovered": 0, 00:15:48.802 "num_base_bdevs_operational": 4, 00:15:48.802 "base_bdevs_list": [ 00:15:48.802 { 00:15:48.802 "name": "BaseBdev1", 00:15:48.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.802 "is_configured": false, 00:15:48.802 "data_offset": 0, 00:15:48.802 "data_size": 0 00:15:48.802 }, 00:15:48.802 { 00:15:48.802 "name": "BaseBdev2", 00:15:48.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.802 "is_configured": false, 00:15:48.802 "data_offset": 0, 00:15:48.802 "data_size": 0 00:15:48.802 }, 00:15:48.802 { 00:15:48.802 "name": "BaseBdev3", 00:15:48.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.802 "is_configured": false, 00:15:48.802 "data_offset": 0, 00:15:48.802 "data_size": 0 00:15:48.802 }, 00:15:48.802 { 00:15:48.802 "name": "BaseBdev4", 00:15:48.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.802 "is_configured": false, 00:15:48.802 "data_offset": 0, 00:15:48.802 "data_size": 0 00:15:48.802 } 00:15:48.802 ] 00:15:48.802 }' 00:15:48.802 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.802 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.061 15:23:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:49.061 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.061 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.061 [2024-11-20 15:23:35.530883] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:49.061 [2024-11-20 15:23:35.531111] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:49.061 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.061 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:49.061 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.061 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.319 [2024-11-20 15:23:35.542894] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:49.319 [2024-11-20 15:23:35.542952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:49.319 [2024-11-20 15:23:35.542963] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:49.319 [2024-11-20 15:23:35.542977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:49.319 [2024-11-20 15:23:35.542985] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:49.319 [2024-11-20 15:23:35.542999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:49.319 [2024-11-20 15:23:35.543007] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:15:49.319 [2024-11-20 15:23:35.543019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:49.319 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.319 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:49.319 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.319 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.319 [2024-11-20 15:23:35.592552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:49.319 BaseBdev1 00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.320 
15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.320 [ 00:15:49.320 { 00:15:49.320 "name": "BaseBdev1", 00:15:49.320 "aliases": [ 00:15:49.320 "1f9a07f7-becf-4dfa-8394-64a86d7a8a60" 00:15:49.320 ], 00:15:49.320 "product_name": "Malloc disk", 00:15:49.320 "block_size": 512, 00:15:49.320 "num_blocks": 65536, 00:15:49.320 "uuid": "1f9a07f7-becf-4dfa-8394-64a86d7a8a60", 00:15:49.320 "assigned_rate_limits": { 00:15:49.320 "rw_ios_per_sec": 0, 00:15:49.320 "rw_mbytes_per_sec": 0, 00:15:49.320 "r_mbytes_per_sec": 0, 00:15:49.320 "w_mbytes_per_sec": 0 00:15:49.320 }, 00:15:49.320 "claimed": true, 00:15:49.320 "claim_type": "exclusive_write", 00:15:49.320 "zoned": false, 00:15:49.320 "supported_io_types": { 00:15:49.320 "read": true, 00:15:49.320 "write": true, 00:15:49.320 "unmap": true, 00:15:49.320 "flush": true, 00:15:49.320 "reset": true, 00:15:49.320 "nvme_admin": false, 00:15:49.320 "nvme_io": false, 00:15:49.320 "nvme_io_md": false, 00:15:49.320 "write_zeroes": true, 00:15:49.320 "zcopy": true, 00:15:49.320 "get_zone_info": false, 00:15:49.320 "zone_management": false, 00:15:49.320 "zone_append": false, 00:15:49.320 "compare": false, 00:15:49.320 "compare_and_write": false, 00:15:49.320 "abort": true, 00:15:49.320 "seek_hole": false, 00:15:49.320 "seek_data": false, 00:15:49.320 "copy": true, 00:15:49.320 "nvme_iov_md": false 00:15:49.320 }, 00:15:49.320 "memory_domains": [ 00:15:49.320 { 00:15:49.320 "dma_device_id": "system", 00:15:49.320 "dma_device_type": 1 00:15:49.320 }, 00:15:49.320 { 00:15:49.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.320 "dma_device_type": 2 00:15:49.320 } 00:15:49.320 ], 00:15:49.320 "driver_specific": {} 00:15:49.320 } 
00:15:49.320 ] 00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.320 "name": "Existed_Raid", 00:15:49.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.320 "strip_size_kb": 64, 00:15:49.320 "state": "configuring", 00:15:49.320 "raid_level": "raid5f", 00:15:49.320 "superblock": false, 00:15:49.320 "num_base_bdevs": 4, 00:15:49.320 "num_base_bdevs_discovered": 1, 00:15:49.320 "num_base_bdevs_operational": 4, 00:15:49.320 "base_bdevs_list": [ 00:15:49.320 { 00:15:49.320 "name": "BaseBdev1", 00:15:49.320 "uuid": "1f9a07f7-becf-4dfa-8394-64a86d7a8a60", 00:15:49.320 "is_configured": true, 00:15:49.320 "data_offset": 0, 00:15:49.320 "data_size": 65536 00:15:49.320 }, 00:15:49.320 { 00:15:49.320 "name": "BaseBdev2", 00:15:49.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.320 "is_configured": false, 00:15:49.320 "data_offset": 0, 00:15:49.320 "data_size": 0 00:15:49.320 }, 00:15:49.320 { 00:15:49.320 "name": "BaseBdev3", 00:15:49.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.320 "is_configured": false, 00:15:49.320 "data_offset": 0, 00:15:49.320 "data_size": 0 00:15:49.320 }, 00:15:49.320 { 00:15:49.320 "name": "BaseBdev4", 00:15:49.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.320 "is_configured": false, 00:15:49.320 "data_offset": 0, 00:15:49.320 "data_size": 0 00:15:49.320 } 00:15:49.320 ] 00:15:49.320 }' 00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.320 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.886 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:49.887 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.887 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.887 
[2024-11-20 15:23:36.075943] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:49.887 [2024-11-20 15:23:36.076004] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:49.887 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.887 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:49.887 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.887 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.887 [2024-11-20 15:23:36.088010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:49.887 [2024-11-20 15:23:36.090294] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:49.887 [2024-11-20 15:23:36.090500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:49.887 [2024-11-20 15:23:36.090675] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:49.887 [2024-11-20 15:23:36.090751] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:49.887 [2024-11-20 15:23:36.090789] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:49.887 [2024-11-20 15:23:36.090910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:49.887 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.887 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:49.887 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:15:49.887 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:49.887 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.887 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.887 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.887 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.887 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:49.887 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.887 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.887 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.887 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.887 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.887 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.887 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.887 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.887 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.887 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.887 "name": "Existed_Raid", 00:15:49.887 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:49.887 "strip_size_kb": 64, 00:15:49.887 "state": "configuring", 00:15:49.887 "raid_level": "raid5f", 00:15:49.887 "superblock": false, 00:15:49.887 "num_base_bdevs": 4, 00:15:49.887 "num_base_bdevs_discovered": 1, 00:15:49.887 "num_base_bdevs_operational": 4, 00:15:49.887 "base_bdevs_list": [ 00:15:49.887 { 00:15:49.887 "name": "BaseBdev1", 00:15:49.887 "uuid": "1f9a07f7-becf-4dfa-8394-64a86d7a8a60", 00:15:49.887 "is_configured": true, 00:15:49.887 "data_offset": 0, 00:15:49.887 "data_size": 65536 00:15:49.887 }, 00:15:49.887 { 00:15:49.887 "name": "BaseBdev2", 00:15:49.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.887 "is_configured": false, 00:15:49.887 "data_offset": 0, 00:15:49.887 "data_size": 0 00:15:49.887 }, 00:15:49.887 { 00:15:49.887 "name": "BaseBdev3", 00:15:49.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.887 "is_configured": false, 00:15:49.887 "data_offset": 0, 00:15:49.887 "data_size": 0 00:15:49.887 }, 00:15:49.887 { 00:15:49.887 "name": "BaseBdev4", 00:15:49.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.887 "is_configured": false, 00:15:49.887 "data_offset": 0, 00:15:49.887 "data_size": 0 00:15:49.887 } 00:15:49.887 ] 00:15:49.887 }' 00:15:49.887 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.887 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.145 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:50.145 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.145 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.145 [2024-11-20 15:23:36.547769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:50.145 BaseBdev2 00:15:50.145 15:23:36 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.145 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:50.145 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:50.145 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:50.145 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:50.145 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:50.146 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:50.146 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:50.146 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.146 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.146 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.146 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:50.146 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.146 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.146 [ 00:15:50.146 { 00:15:50.146 "name": "BaseBdev2", 00:15:50.146 "aliases": [ 00:15:50.146 "67fd787b-01a0-48ff-b3ab-09e6ca8eafa0" 00:15:50.146 ], 00:15:50.146 "product_name": "Malloc disk", 00:15:50.146 "block_size": 512, 00:15:50.146 "num_blocks": 65536, 00:15:50.146 "uuid": "67fd787b-01a0-48ff-b3ab-09e6ca8eafa0", 00:15:50.146 "assigned_rate_limits": { 00:15:50.146 "rw_ios_per_sec": 0, 00:15:50.146 "rw_mbytes_per_sec": 0, 00:15:50.146 
"r_mbytes_per_sec": 0, 00:15:50.146 "w_mbytes_per_sec": 0 00:15:50.146 }, 00:15:50.146 "claimed": true, 00:15:50.146 "claim_type": "exclusive_write", 00:15:50.146 "zoned": false, 00:15:50.146 "supported_io_types": { 00:15:50.146 "read": true, 00:15:50.146 "write": true, 00:15:50.146 "unmap": true, 00:15:50.146 "flush": true, 00:15:50.146 "reset": true, 00:15:50.146 "nvme_admin": false, 00:15:50.146 "nvme_io": false, 00:15:50.146 "nvme_io_md": false, 00:15:50.146 "write_zeroes": true, 00:15:50.146 "zcopy": true, 00:15:50.146 "get_zone_info": false, 00:15:50.146 "zone_management": false, 00:15:50.146 "zone_append": false, 00:15:50.146 "compare": false, 00:15:50.146 "compare_and_write": false, 00:15:50.146 "abort": true, 00:15:50.146 "seek_hole": false, 00:15:50.146 "seek_data": false, 00:15:50.146 "copy": true, 00:15:50.146 "nvme_iov_md": false 00:15:50.146 }, 00:15:50.146 "memory_domains": [ 00:15:50.146 { 00:15:50.146 "dma_device_id": "system", 00:15:50.146 "dma_device_type": 1 00:15:50.146 }, 00:15:50.146 { 00:15:50.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.146 "dma_device_type": 2 00:15:50.146 } 00:15:50.146 ], 00:15:50.146 "driver_specific": {} 00:15:50.146 } 00:15:50.146 ] 00:15:50.146 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.146 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:50.146 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:50.146 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:50.146 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:50.146 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.146 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:15:50.146 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.146 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.146 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:50.146 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.146 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.146 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.146 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.146 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.146 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.146 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.146 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.146 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.404 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.404 "name": "Existed_Raid", 00:15:50.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.404 "strip_size_kb": 64, 00:15:50.404 "state": "configuring", 00:15:50.404 "raid_level": "raid5f", 00:15:50.404 "superblock": false, 00:15:50.404 "num_base_bdevs": 4, 00:15:50.404 "num_base_bdevs_discovered": 2, 00:15:50.404 "num_base_bdevs_operational": 4, 00:15:50.404 "base_bdevs_list": [ 00:15:50.404 { 00:15:50.404 "name": "BaseBdev1", 00:15:50.404 "uuid": 
"1f9a07f7-becf-4dfa-8394-64a86d7a8a60", 00:15:50.404 "is_configured": true, 00:15:50.405 "data_offset": 0, 00:15:50.405 "data_size": 65536 00:15:50.405 }, 00:15:50.405 { 00:15:50.405 "name": "BaseBdev2", 00:15:50.405 "uuid": "67fd787b-01a0-48ff-b3ab-09e6ca8eafa0", 00:15:50.405 "is_configured": true, 00:15:50.405 "data_offset": 0, 00:15:50.405 "data_size": 65536 00:15:50.405 }, 00:15:50.405 { 00:15:50.405 "name": "BaseBdev3", 00:15:50.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.405 "is_configured": false, 00:15:50.405 "data_offset": 0, 00:15:50.405 "data_size": 0 00:15:50.405 }, 00:15:50.405 { 00:15:50.405 "name": "BaseBdev4", 00:15:50.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.405 "is_configured": false, 00:15:50.405 "data_offset": 0, 00:15:50.405 "data_size": 0 00:15:50.405 } 00:15:50.405 ] 00:15:50.405 }' 00:15:50.405 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.405 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.663 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:50.663 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.663 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.663 [2024-11-20 15:23:37.115075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:50.663 BaseBdev3 00:15:50.663 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.663 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:50.663 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:50.663 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:15:50.663 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:50.663 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:50.663 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:50.663 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:50.663 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.663 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.663 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.663 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:50.663 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.663 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.663 [ 00:15:50.663 { 00:15:50.663 "name": "BaseBdev3", 00:15:50.663 "aliases": [ 00:15:50.663 "28bfee16-51c9-40cb-b58c-9f80468dd22c" 00:15:50.663 ], 00:15:50.663 "product_name": "Malloc disk", 00:15:50.663 "block_size": 512, 00:15:50.663 "num_blocks": 65536, 00:15:50.663 "uuid": "28bfee16-51c9-40cb-b58c-9f80468dd22c", 00:15:50.663 "assigned_rate_limits": { 00:15:50.922 "rw_ios_per_sec": 0, 00:15:50.922 "rw_mbytes_per_sec": 0, 00:15:50.922 "r_mbytes_per_sec": 0, 00:15:50.922 "w_mbytes_per_sec": 0 00:15:50.922 }, 00:15:50.922 "claimed": true, 00:15:50.922 "claim_type": "exclusive_write", 00:15:50.922 "zoned": false, 00:15:50.922 "supported_io_types": { 00:15:50.922 "read": true, 00:15:50.922 "write": true, 00:15:50.922 "unmap": true, 00:15:50.922 "flush": true, 00:15:50.922 "reset": true, 00:15:50.922 "nvme_admin": false, 
00:15:50.922 "nvme_io": false, 00:15:50.922 "nvme_io_md": false, 00:15:50.922 "write_zeroes": true, 00:15:50.922 "zcopy": true, 00:15:50.922 "get_zone_info": false, 00:15:50.922 "zone_management": false, 00:15:50.922 "zone_append": false, 00:15:50.922 "compare": false, 00:15:50.922 "compare_and_write": false, 00:15:50.922 "abort": true, 00:15:50.922 "seek_hole": false, 00:15:50.922 "seek_data": false, 00:15:50.922 "copy": true, 00:15:50.922 "nvme_iov_md": false 00:15:50.922 }, 00:15:50.922 "memory_domains": [ 00:15:50.922 { 00:15:50.922 "dma_device_id": "system", 00:15:50.922 "dma_device_type": 1 00:15:50.922 }, 00:15:50.922 { 00:15:50.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.922 "dma_device_type": 2 00:15:50.922 } 00:15:50.922 ], 00:15:50.922 "driver_specific": {} 00:15:50.922 } 00:15:50.922 ] 00:15:50.922 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.922 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:50.922 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:50.922 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:50.922 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:50.922 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.922 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.922 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.922 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.922 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:15:50.922 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.922 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.922 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.922 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.922 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.922 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.922 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.922 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.922 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.922 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.922 "name": "Existed_Raid", 00:15:50.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.922 "strip_size_kb": 64, 00:15:50.922 "state": "configuring", 00:15:50.922 "raid_level": "raid5f", 00:15:50.922 "superblock": false, 00:15:50.922 "num_base_bdevs": 4, 00:15:50.922 "num_base_bdevs_discovered": 3, 00:15:50.922 "num_base_bdevs_operational": 4, 00:15:50.922 "base_bdevs_list": [ 00:15:50.922 { 00:15:50.922 "name": "BaseBdev1", 00:15:50.922 "uuid": "1f9a07f7-becf-4dfa-8394-64a86d7a8a60", 00:15:50.922 "is_configured": true, 00:15:50.922 "data_offset": 0, 00:15:50.922 "data_size": 65536 00:15:50.922 }, 00:15:50.922 { 00:15:50.922 "name": "BaseBdev2", 00:15:50.922 "uuid": "67fd787b-01a0-48ff-b3ab-09e6ca8eafa0", 00:15:50.922 "is_configured": true, 00:15:50.922 "data_offset": 0, 00:15:50.922 "data_size": 65536 00:15:50.922 }, 00:15:50.922 { 
00:15:50.922 "name": "BaseBdev3", 00:15:50.922 "uuid": "28bfee16-51c9-40cb-b58c-9f80468dd22c", 00:15:50.922 "is_configured": true, 00:15:50.922 "data_offset": 0, 00:15:50.922 "data_size": 65536 00:15:50.922 }, 00:15:50.922 { 00:15:50.922 "name": "BaseBdev4", 00:15:50.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.922 "is_configured": false, 00:15:50.922 "data_offset": 0, 00:15:50.922 "data_size": 0 00:15:50.922 } 00:15:50.922 ] 00:15:50.922 }' 00:15:50.922 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.922 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.183 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:51.183 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.183 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.448 [2024-11-20 15:23:37.690579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:51.448 [2024-11-20 15:23:37.690695] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:51.448 [2024-11-20 15:23:37.690708] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:51.448 [2024-11-20 15:23:37.690999] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:51.448 [2024-11-20 15:23:37.698603] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:51.448 BaseBdev4 00:15:51.448 [2024-11-20 15:23:37.698880] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:51.448 [2024-11-20 15:23:37.699265] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.448 15:23:37 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.448 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:51.448 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:51.448 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:51.448 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:51.448 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:51.448 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:51.448 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:51.448 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.448 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.448 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.448 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:51.448 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.448 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.448 [ 00:15:51.448 { 00:15:51.448 "name": "BaseBdev4", 00:15:51.448 "aliases": [ 00:15:51.448 "4fe5084a-80d4-4d40-8805-fb922a6fe541" 00:15:51.448 ], 00:15:51.448 "product_name": "Malloc disk", 00:15:51.448 "block_size": 512, 00:15:51.448 "num_blocks": 65536, 00:15:51.448 "uuid": "4fe5084a-80d4-4d40-8805-fb922a6fe541", 00:15:51.448 "assigned_rate_limits": { 00:15:51.448 "rw_ios_per_sec": 0, 00:15:51.448 
"rw_mbytes_per_sec": 0, 00:15:51.448 "r_mbytes_per_sec": 0, 00:15:51.448 "w_mbytes_per_sec": 0 00:15:51.448 }, 00:15:51.448 "claimed": true, 00:15:51.448 "claim_type": "exclusive_write", 00:15:51.448 "zoned": false, 00:15:51.448 "supported_io_types": { 00:15:51.448 "read": true, 00:15:51.448 "write": true, 00:15:51.448 "unmap": true, 00:15:51.448 "flush": true, 00:15:51.448 "reset": true, 00:15:51.448 "nvme_admin": false, 00:15:51.448 "nvme_io": false, 00:15:51.448 "nvme_io_md": false, 00:15:51.448 "write_zeroes": true, 00:15:51.448 "zcopy": true, 00:15:51.448 "get_zone_info": false, 00:15:51.448 "zone_management": false, 00:15:51.448 "zone_append": false, 00:15:51.448 "compare": false, 00:15:51.448 "compare_and_write": false, 00:15:51.448 "abort": true, 00:15:51.448 "seek_hole": false, 00:15:51.448 "seek_data": false, 00:15:51.448 "copy": true, 00:15:51.448 "nvme_iov_md": false 00:15:51.448 }, 00:15:51.448 "memory_domains": [ 00:15:51.448 { 00:15:51.448 "dma_device_id": "system", 00:15:51.448 "dma_device_type": 1 00:15:51.448 }, 00:15:51.448 { 00:15:51.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.448 "dma_device_type": 2 00:15:51.448 } 00:15:51.448 ], 00:15:51.448 "driver_specific": {} 00:15:51.448 } 00:15:51.448 ] 00:15:51.448 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.448 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:51.448 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:51.448 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:51.448 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:51.448 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.448 15:23:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.448 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.448 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.448 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:51.448 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.448 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.448 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.448 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.448 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.448 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.448 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.448 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.448 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.448 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.448 "name": "Existed_Raid", 00:15:51.448 "uuid": "53bd9bf2-8dff-4603-bc70-ce56dc79144d", 00:15:51.448 "strip_size_kb": 64, 00:15:51.448 "state": "online", 00:15:51.448 "raid_level": "raid5f", 00:15:51.448 "superblock": false, 00:15:51.448 "num_base_bdevs": 4, 00:15:51.448 "num_base_bdevs_discovered": 4, 00:15:51.448 "num_base_bdevs_operational": 4, 00:15:51.448 "base_bdevs_list": [ 00:15:51.448 { 00:15:51.448 "name": 
"BaseBdev1", 00:15:51.448 "uuid": "1f9a07f7-becf-4dfa-8394-64a86d7a8a60", 00:15:51.448 "is_configured": true, 00:15:51.448 "data_offset": 0, 00:15:51.448 "data_size": 65536 00:15:51.448 }, 00:15:51.448 { 00:15:51.448 "name": "BaseBdev2", 00:15:51.448 "uuid": "67fd787b-01a0-48ff-b3ab-09e6ca8eafa0", 00:15:51.448 "is_configured": true, 00:15:51.448 "data_offset": 0, 00:15:51.448 "data_size": 65536 00:15:51.448 }, 00:15:51.448 { 00:15:51.448 "name": "BaseBdev3", 00:15:51.448 "uuid": "28bfee16-51c9-40cb-b58c-9f80468dd22c", 00:15:51.448 "is_configured": true, 00:15:51.448 "data_offset": 0, 00:15:51.449 "data_size": 65536 00:15:51.449 }, 00:15:51.449 { 00:15:51.449 "name": "BaseBdev4", 00:15:51.449 "uuid": "4fe5084a-80d4-4d40-8805-fb922a6fe541", 00:15:51.449 "is_configured": true, 00:15:51.449 "data_offset": 0, 00:15:51.449 "data_size": 65536 00:15:51.449 } 00:15:51.449 ] 00:15:51.449 }' 00:15:51.449 15:23:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.449 15:23:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.708 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:51.708 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:51.708 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:51.708 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:51.708 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:51.708 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:51.968 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:51.968 15:23:38 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.968 15:23:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.968 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:51.968 [2024-11-20 15:23:38.195162] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:51.968 15:23:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.968 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:51.968 "name": "Existed_Raid", 00:15:51.968 "aliases": [ 00:15:51.968 "53bd9bf2-8dff-4603-bc70-ce56dc79144d" 00:15:51.968 ], 00:15:51.968 "product_name": "Raid Volume", 00:15:51.968 "block_size": 512, 00:15:51.968 "num_blocks": 196608, 00:15:51.968 "uuid": "53bd9bf2-8dff-4603-bc70-ce56dc79144d", 00:15:51.968 "assigned_rate_limits": { 00:15:51.968 "rw_ios_per_sec": 0, 00:15:51.968 "rw_mbytes_per_sec": 0, 00:15:51.969 "r_mbytes_per_sec": 0, 00:15:51.969 "w_mbytes_per_sec": 0 00:15:51.969 }, 00:15:51.969 "claimed": false, 00:15:51.969 "zoned": false, 00:15:51.969 "supported_io_types": { 00:15:51.969 "read": true, 00:15:51.969 "write": true, 00:15:51.969 "unmap": false, 00:15:51.969 "flush": false, 00:15:51.969 "reset": true, 00:15:51.969 "nvme_admin": false, 00:15:51.969 "nvme_io": false, 00:15:51.969 "nvme_io_md": false, 00:15:51.969 "write_zeroes": true, 00:15:51.969 "zcopy": false, 00:15:51.969 "get_zone_info": false, 00:15:51.969 "zone_management": false, 00:15:51.969 "zone_append": false, 00:15:51.969 "compare": false, 00:15:51.969 "compare_and_write": false, 00:15:51.969 "abort": false, 00:15:51.969 "seek_hole": false, 00:15:51.969 "seek_data": false, 00:15:51.969 "copy": false, 00:15:51.969 "nvme_iov_md": false 00:15:51.969 }, 00:15:51.969 "driver_specific": { 00:15:51.969 "raid": { 00:15:51.969 "uuid": "53bd9bf2-8dff-4603-bc70-ce56dc79144d", 00:15:51.969 "strip_size_kb": 64, 
00:15:51.969 "state": "online", 00:15:51.969 "raid_level": "raid5f", 00:15:51.969 "superblock": false, 00:15:51.969 "num_base_bdevs": 4, 00:15:51.969 "num_base_bdevs_discovered": 4, 00:15:51.969 "num_base_bdevs_operational": 4, 00:15:51.969 "base_bdevs_list": [ 00:15:51.969 { 00:15:51.969 "name": "BaseBdev1", 00:15:51.969 "uuid": "1f9a07f7-becf-4dfa-8394-64a86d7a8a60", 00:15:51.969 "is_configured": true, 00:15:51.969 "data_offset": 0, 00:15:51.969 "data_size": 65536 00:15:51.969 }, 00:15:51.969 { 00:15:51.969 "name": "BaseBdev2", 00:15:51.969 "uuid": "67fd787b-01a0-48ff-b3ab-09e6ca8eafa0", 00:15:51.969 "is_configured": true, 00:15:51.969 "data_offset": 0, 00:15:51.969 "data_size": 65536 00:15:51.969 }, 00:15:51.969 { 00:15:51.969 "name": "BaseBdev3", 00:15:51.969 "uuid": "28bfee16-51c9-40cb-b58c-9f80468dd22c", 00:15:51.969 "is_configured": true, 00:15:51.969 "data_offset": 0, 00:15:51.969 "data_size": 65536 00:15:51.969 }, 00:15:51.969 { 00:15:51.969 "name": "BaseBdev4", 00:15:51.969 "uuid": "4fe5084a-80d4-4d40-8805-fb922a6fe541", 00:15:51.969 "is_configured": true, 00:15:51.969 "data_offset": 0, 00:15:51.969 "data_size": 65536 00:15:51.969 } 00:15:51.969 ] 00:15:51.969 } 00:15:51.969 } 00:15:51.969 }' 00:15:51.969 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:51.969 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:51.969 BaseBdev2 00:15:51.969 BaseBdev3 00:15:51.969 BaseBdev4' 00:15:51.969 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.969 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:51.969 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.969 15:23:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:51.969 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.969 15:23:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.969 15:23:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.969 15:23:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.969 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:51.969 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:51.969 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.969 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:51.969 15:23:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.969 15:23:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.969 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.969 15:23:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.969 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:51.969 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:51.969 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.969 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:15:51.969 15:23:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.969 15:23:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.969 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.969 15:23:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.229 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.229 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.229 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.229 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:52.229 15:23:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.229 15:23:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.229 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.229 15:23:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.229 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.229 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.229 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:52.229 15:23:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.229 15:23:38 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:15:52.229 [2024-11-20 15:23:38.502904] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:52.229 15:23:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.229 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:52.229 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:52.229 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:52.229 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:52.229 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:52.229 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:52.229 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.229 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.229 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.229 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.229 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:52.229 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.230 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.230 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.230 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.230 15:23:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.230 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.230 15:23:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.230 15:23:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.230 15:23:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.230 15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.230 "name": "Existed_Raid", 00:15:52.230 "uuid": "53bd9bf2-8dff-4603-bc70-ce56dc79144d", 00:15:52.230 "strip_size_kb": 64, 00:15:52.230 "state": "online", 00:15:52.230 "raid_level": "raid5f", 00:15:52.230 "superblock": false, 00:15:52.230 "num_base_bdevs": 4, 00:15:52.230 "num_base_bdevs_discovered": 3, 00:15:52.230 "num_base_bdevs_operational": 3, 00:15:52.230 "base_bdevs_list": [ 00:15:52.230 { 00:15:52.230 "name": null, 00:15:52.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.230 "is_configured": false, 00:15:52.230 "data_offset": 0, 00:15:52.230 "data_size": 65536 00:15:52.230 }, 00:15:52.230 { 00:15:52.230 "name": "BaseBdev2", 00:15:52.230 "uuid": "67fd787b-01a0-48ff-b3ab-09e6ca8eafa0", 00:15:52.230 "is_configured": true, 00:15:52.230 "data_offset": 0, 00:15:52.230 "data_size": 65536 00:15:52.230 }, 00:15:52.230 { 00:15:52.230 "name": "BaseBdev3", 00:15:52.230 "uuid": "28bfee16-51c9-40cb-b58c-9f80468dd22c", 00:15:52.230 "is_configured": true, 00:15:52.230 "data_offset": 0, 00:15:52.230 "data_size": 65536 00:15:52.230 }, 00:15:52.230 { 00:15:52.230 "name": "BaseBdev4", 00:15:52.230 "uuid": "4fe5084a-80d4-4d40-8805-fb922a6fe541", 00:15:52.230 "is_configured": true, 00:15:52.230 "data_offset": 0, 00:15:52.230 "data_size": 65536 00:15:52.230 } 00:15:52.230 ] 00:15:52.230 }' 00:15:52.230 
15:23:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.230 15:23:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.798 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:52.798 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:52.798 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.798 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.798 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.798 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:52.798 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.798 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:52.798 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:52.798 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:52.798 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.798 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.798 [2024-11-20 15:23:39.095303] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:52.798 [2024-11-20 15:23:39.095613] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:52.798 [2024-11-20 15:23:39.193470] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:52.798 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:15:52.798 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:52.798 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:52.798 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.798 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.798 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:52.798 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.798 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.798 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:52.798 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:52.798 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:52.798 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.798 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.798 [2024-11-20 15:23:39.245460] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:53.057 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.057 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:53.057 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:53.057 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.057 15:23:39 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.057 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:53.057 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.057 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.057 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:53.057 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:53.057 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:53.057 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.057 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.057 [2024-11-20 15:23:39.396502] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:53.057 [2024-11-20 15:23:39.396558] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:53.057 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.057 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:53.057 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:53.057 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.058 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:53.058 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.058 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:53.058 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.058 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:53.058 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:53.058 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:53.058 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:53.058 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:53.058 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:53.058 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.058 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.317 BaseBdev2 00:15:53.317 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.317 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.318 [ 00:15:53.318 { 00:15:53.318 "name": "BaseBdev2", 00:15:53.318 "aliases": [ 00:15:53.318 "90b89b44-fc6e-43fa-9fa2-01f6485379f5" 00:15:53.318 ], 00:15:53.318 "product_name": "Malloc disk", 00:15:53.318 "block_size": 512, 00:15:53.318 "num_blocks": 65536, 00:15:53.318 "uuid": "90b89b44-fc6e-43fa-9fa2-01f6485379f5", 00:15:53.318 "assigned_rate_limits": { 00:15:53.318 "rw_ios_per_sec": 0, 00:15:53.318 "rw_mbytes_per_sec": 0, 00:15:53.318 "r_mbytes_per_sec": 0, 00:15:53.318 "w_mbytes_per_sec": 0 00:15:53.318 }, 00:15:53.318 "claimed": false, 00:15:53.318 "zoned": false, 00:15:53.318 "supported_io_types": { 00:15:53.318 "read": true, 00:15:53.318 "write": true, 00:15:53.318 "unmap": true, 00:15:53.318 "flush": true, 00:15:53.318 "reset": true, 00:15:53.318 "nvme_admin": false, 00:15:53.318 "nvme_io": false, 00:15:53.318 "nvme_io_md": false, 00:15:53.318 "write_zeroes": true, 00:15:53.318 "zcopy": true, 00:15:53.318 "get_zone_info": false, 00:15:53.318 "zone_management": false, 00:15:53.318 "zone_append": false, 00:15:53.318 "compare": false, 00:15:53.318 "compare_and_write": false, 00:15:53.318 "abort": true, 00:15:53.318 "seek_hole": false, 00:15:53.318 "seek_data": false, 00:15:53.318 "copy": true, 00:15:53.318 "nvme_iov_md": false 00:15:53.318 }, 00:15:53.318 "memory_domains": [ 00:15:53.318 { 00:15:53.318 "dma_device_id": "system", 00:15:53.318 
"dma_device_type": 1 00:15:53.318 }, 00:15:53.318 { 00:15:53.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.318 "dma_device_type": 2 00:15:53.318 } 00:15:53.318 ], 00:15:53.318 "driver_specific": {} 00:15:53.318 } 00:15:53.318 ] 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.318 BaseBdev3 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:53.318 15:23:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.318 [ 00:15:53.318 { 00:15:53.318 "name": "BaseBdev3", 00:15:53.318 "aliases": [ 00:15:53.318 "79f3f431-42ac-4c9b-ba66-947798b981d9" 00:15:53.318 ], 00:15:53.318 "product_name": "Malloc disk", 00:15:53.318 "block_size": 512, 00:15:53.318 "num_blocks": 65536, 00:15:53.318 "uuid": "79f3f431-42ac-4c9b-ba66-947798b981d9", 00:15:53.318 "assigned_rate_limits": { 00:15:53.318 "rw_ios_per_sec": 0, 00:15:53.318 "rw_mbytes_per_sec": 0, 00:15:53.318 "r_mbytes_per_sec": 0, 00:15:53.318 "w_mbytes_per_sec": 0 00:15:53.318 }, 00:15:53.318 "claimed": false, 00:15:53.318 "zoned": false, 00:15:53.318 "supported_io_types": { 00:15:53.318 "read": true, 00:15:53.318 "write": true, 00:15:53.318 "unmap": true, 00:15:53.318 "flush": true, 00:15:53.318 "reset": true, 00:15:53.318 "nvme_admin": false, 00:15:53.318 "nvme_io": false, 00:15:53.318 "nvme_io_md": false, 00:15:53.318 "write_zeroes": true, 00:15:53.318 "zcopy": true, 00:15:53.318 "get_zone_info": false, 00:15:53.318 "zone_management": false, 00:15:53.318 "zone_append": false, 00:15:53.318 "compare": false, 00:15:53.318 "compare_and_write": false, 00:15:53.318 "abort": true, 00:15:53.318 "seek_hole": false, 00:15:53.318 "seek_data": false, 00:15:53.318 "copy": true, 00:15:53.318 "nvme_iov_md": false 00:15:53.318 }, 00:15:53.318 "memory_domains": [ 00:15:53.318 { 00:15:53.318 
"dma_device_id": "system", 00:15:53.318 "dma_device_type": 1 00:15:53.318 }, 00:15:53.318 { 00:15:53.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.318 "dma_device_type": 2 00:15:53.318 } 00:15:53.318 ], 00:15:53.318 "driver_specific": {} 00:15:53.318 } 00:15:53.318 ] 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.318 BaseBdev4 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.318 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.318 [ 00:15:53.318 { 00:15:53.318 "name": "BaseBdev4", 00:15:53.318 "aliases": [ 00:15:53.318 "9ac1b396-1168-4dfd-9036-a8c14214c11c" 00:15:53.318 ], 00:15:53.318 "product_name": "Malloc disk", 00:15:53.318 "block_size": 512, 00:15:53.318 "num_blocks": 65536, 00:15:53.318 "uuid": "9ac1b396-1168-4dfd-9036-a8c14214c11c", 00:15:53.318 "assigned_rate_limits": { 00:15:53.318 "rw_ios_per_sec": 0, 00:15:53.318 "rw_mbytes_per_sec": 0, 00:15:53.318 "r_mbytes_per_sec": 0, 00:15:53.318 "w_mbytes_per_sec": 0 00:15:53.318 }, 00:15:53.318 "claimed": false, 00:15:53.318 "zoned": false, 00:15:53.318 "supported_io_types": { 00:15:53.318 "read": true, 00:15:53.318 "write": true, 00:15:53.318 "unmap": true, 00:15:53.318 "flush": true, 00:15:53.318 "reset": true, 00:15:53.318 "nvme_admin": false, 00:15:53.318 "nvme_io": false, 00:15:53.318 "nvme_io_md": false, 00:15:53.318 "write_zeroes": true, 00:15:53.318 "zcopy": true, 00:15:53.318 "get_zone_info": false, 00:15:53.318 "zone_management": false, 00:15:53.318 "zone_append": false, 00:15:53.318 "compare": false, 00:15:53.318 "compare_and_write": false, 00:15:53.318 "abort": true, 00:15:53.319 "seek_hole": false, 00:15:53.319 "seek_data": false, 00:15:53.319 "copy": true, 00:15:53.319 "nvme_iov_md": false 00:15:53.319 }, 00:15:53.319 "memory_domains": [ 
00:15:53.319 { 00:15:53.319 "dma_device_id": "system", 00:15:53.319 "dma_device_type": 1 00:15:53.319 }, 00:15:53.319 { 00:15:53.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.319 "dma_device_type": 2 00:15:53.319 } 00:15:53.319 ], 00:15:53.319 "driver_specific": {} 00:15:53.319 } 00:15:53.319 ] 00:15:53.319 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.319 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:53.319 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:53.319 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:53.319 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:53.319 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.319 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.319 [2024-11-20 15:23:39.777265] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:53.319 [2024-11-20 15:23:39.777329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:53.319 [2024-11-20 15:23:39.777363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:53.319 [2024-11-20 15:23:39.779722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:53.319 [2024-11-20 15:23:39.779793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:53.319 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.319 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:53.319 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:53.319 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:53.319 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.319 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.319 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:53.319 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.319 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.319 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.319 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.319 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.319 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.319 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.319 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.579 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.579 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.579 "name": "Existed_Raid", 00:15:53.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.579 "strip_size_kb": 64, 00:15:53.579 "state": "configuring", 00:15:53.579 "raid_level": "raid5f", 00:15:53.579 
"superblock": false, 00:15:53.579 "num_base_bdevs": 4, 00:15:53.579 "num_base_bdevs_discovered": 3, 00:15:53.579 "num_base_bdevs_operational": 4, 00:15:53.579 "base_bdevs_list": [ 00:15:53.579 { 00:15:53.579 "name": "BaseBdev1", 00:15:53.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.579 "is_configured": false, 00:15:53.579 "data_offset": 0, 00:15:53.579 "data_size": 0 00:15:53.579 }, 00:15:53.579 { 00:15:53.579 "name": "BaseBdev2", 00:15:53.579 "uuid": "90b89b44-fc6e-43fa-9fa2-01f6485379f5", 00:15:53.579 "is_configured": true, 00:15:53.579 "data_offset": 0, 00:15:53.579 "data_size": 65536 00:15:53.579 }, 00:15:53.579 { 00:15:53.579 "name": "BaseBdev3", 00:15:53.579 "uuid": "79f3f431-42ac-4c9b-ba66-947798b981d9", 00:15:53.579 "is_configured": true, 00:15:53.579 "data_offset": 0, 00:15:53.579 "data_size": 65536 00:15:53.579 }, 00:15:53.579 { 00:15:53.579 "name": "BaseBdev4", 00:15:53.579 "uuid": "9ac1b396-1168-4dfd-9036-a8c14214c11c", 00:15:53.579 "is_configured": true, 00:15:53.579 "data_offset": 0, 00:15:53.579 "data_size": 65536 00:15:53.579 } 00:15:53.579 ] 00:15:53.579 }' 00:15:53.579 15:23:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.579 15:23:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.839 15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:53.839 15:23:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.839 15:23:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.839 [2024-11-20 15:23:40.160712] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:53.839 15:23:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.839 15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:15:53.839 15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:53.839 15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:53.839 15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.839 15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.839 15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:53.839 15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.839 15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.839 15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.839 15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.839 15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.839 15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.839 15:23:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.839 15:23:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.839 15:23:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.839 15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.839 "name": "Existed_Raid", 00:15:53.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.839 "strip_size_kb": 64, 00:15:53.839 "state": "configuring", 00:15:53.839 "raid_level": "raid5f", 00:15:53.839 "superblock": false, 
00:15:53.839 "num_base_bdevs": 4, 00:15:53.839 "num_base_bdevs_discovered": 2, 00:15:53.839 "num_base_bdevs_operational": 4, 00:15:53.839 "base_bdevs_list": [ 00:15:53.839 { 00:15:53.839 "name": "BaseBdev1", 00:15:53.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.839 "is_configured": false, 00:15:53.839 "data_offset": 0, 00:15:53.839 "data_size": 0 00:15:53.839 }, 00:15:53.839 { 00:15:53.839 "name": null, 00:15:53.839 "uuid": "90b89b44-fc6e-43fa-9fa2-01f6485379f5", 00:15:53.839 "is_configured": false, 00:15:53.839 "data_offset": 0, 00:15:53.839 "data_size": 65536 00:15:53.839 }, 00:15:53.839 { 00:15:53.839 "name": "BaseBdev3", 00:15:53.840 "uuid": "79f3f431-42ac-4c9b-ba66-947798b981d9", 00:15:53.840 "is_configured": true, 00:15:53.840 "data_offset": 0, 00:15:53.840 "data_size": 65536 00:15:53.840 }, 00:15:53.840 { 00:15:53.840 "name": "BaseBdev4", 00:15:53.840 "uuid": "9ac1b396-1168-4dfd-9036-a8c14214c11c", 00:15:53.840 "is_configured": true, 00:15:53.840 "data_offset": 0, 00:15:53.840 "data_size": 65536 00:15:53.840 } 00:15:53.840 ] 00:15:53.840 }' 00:15:53.840 15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.840 15:23:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:54.407 
15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.407 [2024-11-20 15:23:40.671327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:54.407 BaseBdev1 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.407 
15:23:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.407 [ 00:15:54.407 { 00:15:54.407 "name": "BaseBdev1", 00:15:54.407 "aliases": [ 00:15:54.407 "be62cc98-60c4-4a11-8f09-e416cb0c8bd1" 00:15:54.407 ], 00:15:54.407 "product_name": "Malloc disk", 00:15:54.407 "block_size": 512, 00:15:54.407 "num_blocks": 65536, 00:15:54.407 "uuid": "be62cc98-60c4-4a11-8f09-e416cb0c8bd1", 00:15:54.407 "assigned_rate_limits": { 00:15:54.407 "rw_ios_per_sec": 0, 00:15:54.407 "rw_mbytes_per_sec": 0, 00:15:54.407 "r_mbytes_per_sec": 0, 00:15:54.407 "w_mbytes_per_sec": 0 00:15:54.407 }, 00:15:54.407 "claimed": true, 00:15:54.407 "claim_type": "exclusive_write", 00:15:54.407 "zoned": false, 00:15:54.407 "supported_io_types": { 00:15:54.407 "read": true, 00:15:54.407 "write": true, 00:15:54.407 "unmap": true, 00:15:54.407 "flush": true, 00:15:54.407 "reset": true, 00:15:54.407 "nvme_admin": false, 00:15:54.407 "nvme_io": false, 00:15:54.407 "nvme_io_md": false, 00:15:54.407 "write_zeroes": true, 00:15:54.407 "zcopy": true, 00:15:54.407 "get_zone_info": false, 00:15:54.407 "zone_management": false, 00:15:54.407 "zone_append": false, 00:15:54.407 "compare": false, 00:15:54.407 "compare_and_write": false, 00:15:54.407 "abort": true, 00:15:54.407 "seek_hole": false, 00:15:54.407 "seek_data": false, 00:15:54.407 "copy": true, 00:15:54.407 "nvme_iov_md": false 00:15:54.407 }, 00:15:54.407 "memory_domains": [ 00:15:54.407 { 00:15:54.407 "dma_device_id": "system", 00:15:54.407 "dma_device_type": 1 00:15:54.407 }, 00:15:54.407 { 00:15:54.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.407 "dma_device_type": 2 00:15:54.407 } 00:15:54.407 ], 00:15:54.407 "driver_specific": {} 00:15:54.407 } 00:15:54.407 ] 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:54.407 15:23:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.407 15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.408 "name": "Existed_Raid", 00:15:54.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.408 "strip_size_kb": 64, 00:15:54.408 "state": 
"configuring", 00:15:54.408 "raid_level": "raid5f", 00:15:54.408 "superblock": false, 00:15:54.408 "num_base_bdevs": 4, 00:15:54.408 "num_base_bdevs_discovered": 3, 00:15:54.408 "num_base_bdevs_operational": 4, 00:15:54.408 "base_bdevs_list": [ 00:15:54.408 { 00:15:54.408 "name": "BaseBdev1", 00:15:54.408 "uuid": "be62cc98-60c4-4a11-8f09-e416cb0c8bd1", 00:15:54.408 "is_configured": true, 00:15:54.408 "data_offset": 0, 00:15:54.408 "data_size": 65536 00:15:54.408 }, 00:15:54.408 { 00:15:54.408 "name": null, 00:15:54.408 "uuid": "90b89b44-fc6e-43fa-9fa2-01f6485379f5", 00:15:54.408 "is_configured": false, 00:15:54.408 "data_offset": 0, 00:15:54.408 "data_size": 65536 00:15:54.408 }, 00:15:54.408 { 00:15:54.408 "name": "BaseBdev3", 00:15:54.408 "uuid": "79f3f431-42ac-4c9b-ba66-947798b981d9", 00:15:54.408 "is_configured": true, 00:15:54.408 "data_offset": 0, 00:15:54.408 "data_size": 65536 00:15:54.408 }, 00:15:54.408 { 00:15:54.408 "name": "BaseBdev4", 00:15:54.408 "uuid": "9ac1b396-1168-4dfd-9036-a8c14214c11c", 00:15:54.408 "is_configured": true, 00:15:54.408 "data_offset": 0, 00:15:54.408 "data_size": 65536 00:15:54.408 } 00:15:54.408 ] 00:15:54.408 }' 00:15:54.408 15:23:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.408 15:23:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.977 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.977 15:23:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.977 15:23:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.977 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:54.977 15:23:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.977 15:23:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:54.977 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:54.977 15:23:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.977 15:23:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.977 [2024-11-20 15:23:41.214897] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:54.977 15:23:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.977 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:54.977 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.977 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.977 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.977 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.977 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:54.977 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.977 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.977 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.977 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.977 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.977 15:23:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.977 15:23:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.977 15:23:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.977 15:23:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.977 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.977 "name": "Existed_Raid", 00:15:54.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.978 "strip_size_kb": 64, 00:15:54.978 "state": "configuring", 00:15:54.978 "raid_level": "raid5f", 00:15:54.978 "superblock": false, 00:15:54.978 "num_base_bdevs": 4, 00:15:54.978 "num_base_bdevs_discovered": 2, 00:15:54.978 "num_base_bdevs_operational": 4, 00:15:54.978 "base_bdevs_list": [ 00:15:54.978 { 00:15:54.978 "name": "BaseBdev1", 00:15:54.978 "uuid": "be62cc98-60c4-4a11-8f09-e416cb0c8bd1", 00:15:54.978 "is_configured": true, 00:15:54.978 "data_offset": 0, 00:15:54.978 "data_size": 65536 00:15:54.978 }, 00:15:54.978 { 00:15:54.978 "name": null, 00:15:54.978 "uuid": "90b89b44-fc6e-43fa-9fa2-01f6485379f5", 00:15:54.978 "is_configured": false, 00:15:54.978 "data_offset": 0, 00:15:54.978 "data_size": 65536 00:15:54.978 }, 00:15:54.978 { 00:15:54.978 "name": null, 00:15:54.978 "uuid": "79f3f431-42ac-4c9b-ba66-947798b981d9", 00:15:54.978 "is_configured": false, 00:15:54.978 "data_offset": 0, 00:15:54.978 "data_size": 65536 00:15:54.978 }, 00:15:54.978 { 00:15:54.978 "name": "BaseBdev4", 00:15:54.978 "uuid": "9ac1b396-1168-4dfd-9036-a8c14214c11c", 00:15:54.978 "is_configured": true, 00:15:54.978 "data_offset": 0, 00:15:54.978 "data_size": 65536 00:15:54.978 } 00:15:54.978 ] 00:15:54.978 }' 00:15:54.978 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.978 15:23:41 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.237 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.237 15:23:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.237 15:23:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.237 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:55.237 15:23:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.237 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:55.237 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:55.237 15:23:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.237 15:23:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.237 [2024-11-20 15:23:41.678864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:55.237 15:23:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.237 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:55.237 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.237 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.237 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.237 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.237 
15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:55.237 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.237 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.237 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.237 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.237 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.237 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.238 15:23:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.238 15:23:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.497 15:23:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.497 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.497 "name": "Existed_Raid", 00:15:55.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.497 "strip_size_kb": 64, 00:15:55.497 "state": "configuring", 00:15:55.497 "raid_level": "raid5f", 00:15:55.497 "superblock": false, 00:15:55.497 "num_base_bdevs": 4, 00:15:55.497 "num_base_bdevs_discovered": 3, 00:15:55.497 "num_base_bdevs_operational": 4, 00:15:55.497 "base_bdevs_list": [ 00:15:55.497 { 00:15:55.497 "name": "BaseBdev1", 00:15:55.497 "uuid": "be62cc98-60c4-4a11-8f09-e416cb0c8bd1", 00:15:55.497 "is_configured": true, 00:15:55.497 "data_offset": 0, 00:15:55.497 "data_size": 65536 00:15:55.497 }, 00:15:55.497 { 00:15:55.497 "name": null, 00:15:55.497 "uuid": "90b89b44-fc6e-43fa-9fa2-01f6485379f5", 00:15:55.497 "is_configured": 
false, 00:15:55.497 "data_offset": 0, 00:15:55.497 "data_size": 65536 00:15:55.497 }, 00:15:55.497 { 00:15:55.497 "name": "BaseBdev3", 00:15:55.497 "uuid": "79f3f431-42ac-4c9b-ba66-947798b981d9", 00:15:55.497 "is_configured": true, 00:15:55.497 "data_offset": 0, 00:15:55.497 "data_size": 65536 00:15:55.497 }, 00:15:55.497 { 00:15:55.497 "name": "BaseBdev4", 00:15:55.497 "uuid": "9ac1b396-1168-4dfd-9036-a8c14214c11c", 00:15:55.497 "is_configured": true, 00:15:55.497 "data_offset": 0, 00:15:55.497 "data_size": 65536 00:15:55.497 } 00:15:55.497 ] 00:15:55.497 }' 00:15:55.497 15:23:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.497 15:23:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.756 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.756 15:23:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.756 15:23:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.756 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:55.756 15:23:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.756 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:55.756 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:55.756 15:23:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.756 15:23:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.756 [2024-11-20 15:23:42.126915] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:55.756 15:23:42 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.756 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:55.756 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.756 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.756 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.756 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.756 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:55.756 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.756 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.756 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.756 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.756 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.756 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.756 15:23:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.756 15:23:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.016 15:23:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.016 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.016 "name": "Existed_Raid", 00:15:56.016 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:56.016 "strip_size_kb": 64, 00:15:56.016 "state": "configuring", 00:15:56.016 "raid_level": "raid5f", 00:15:56.016 "superblock": false, 00:15:56.016 "num_base_bdevs": 4, 00:15:56.016 "num_base_bdevs_discovered": 2, 00:15:56.016 "num_base_bdevs_operational": 4, 00:15:56.016 "base_bdevs_list": [ 00:15:56.016 { 00:15:56.016 "name": null, 00:15:56.016 "uuid": "be62cc98-60c4-4a11-8f09-e416cb0c8bd1", 00:15:56.016 "is_configured": false, 00:15:56.016 "data_offset": 0, 00:15:56.016 "data_size": 65536 00:15:56.016 }, 00:15:56.016 { 00:15:56.016 "name": null, 00:15:56.016 "uuid": "90b89b44-fc6e-43fa-9fa2-01f6485379f5", 00:15:56.016 "is_configured": false, 00:15:56.016 "data_offset": 0, 00:15:56.017 "data_size": 65536 00:15:56.017 }, 00:15:56.017 { 00:15:56.017 "name": "BaseBdev3", 00:15:56.017 "uuid": "79f3f431-42ac-4c9b-ba66-947798b981d9", 00:15:56.017 "is_configured": true, 00:15:56.017 "data_offset": 0, 00:15:56.017 "data_size": 65536 00:15:56.017 }, 00:15:56.017 { 00:15:56.017 "name": "BaseBdev4", 00:15:56.017 "uuid": "9ac1b396-1168-4dfd-9036-a8c14214c11c", 00:15:56.017 "is_configured": true, 00:15:56.017 "data_offset": 0, 00:15:56.017 "data_size": 65536 00:15:56.017 } 00:15:56.017 ] 00:15:56.017 }' 00:15:56.017 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.017 15:23:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.277 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.277 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:56.277 15:23:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.277 15:23:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.277 15:23:42 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.277 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:56.277 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:56.277 15:23:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.277 15:23:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.277 [2024-11-20 15:23:42.731514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:56.277 15:23:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.277 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:56.277 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.277 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:56.277 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.277 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.277 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.277 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.277 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.277 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.277 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.277 15:23:42 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.277 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.277 15:23:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.277 15:23:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.536 15:23:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.536 15:23:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.536 "name": "Existed_Raid", 00:15:56.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.536 "strip_size_kb": 64, 00:15:56.536 "state": "configuring", 00:15:56.536 "raid_level": "raid5f", 00:15:56.536 "superblock": false, 00:15:56.536 "num_base_bdevs": 4, 00:15:56.536 "num_base_bdevs_discovered": 3, 00:15:56.536 "num_base_bdevs_operational": 4, 00:15:56.536 "base_bdevs_list": [ 00:15:56.536 { 00:15:56.536 "name": null, 00:15:56.536 "uuid": "be62cc98-60c4-4a11-8f09-e416cb0c8bd1", 00:15:56.536 "is_configured": false, 00:15:56.536 "data_offset": 0, 00:15:56.536 "data_size": 65536 00:15:56.536 }, 00:15:56.536 { 00:15:56.536 "name": "BaseBdev2", 00:15:56.536 "uuid": "90b89b44-fc6e-43fa-9fa2-01f6485379f5", 00:15:56.536 "is_configured": true, 00:15:56.536 "data_offset": 0, 00:15:56.536 "data_size": 65536 00:15:56.536 }, 00:15:56.536 { 00:15:56.536 "name": "BaseBdev3", 00:15:56.536 "uuid": "79f3f431-42ac-4c9b-ba66-947798b981d9", 00:15:56.536 "is_configured": true, 00:15:56.536 "data_offset": 0, 00:15:56.536 "data_size": 65536 00:15:56.536 }, 00:15:56.536 { 00:15:56.536 "name": "BaseBdev4", 00:15:56.536 "uuid": "9ac1b396-1168-4dfd-9036-a8c14214c11c", 00:15:56.536 "is_configured": true, 00:15:56.536 "data_offset": 0, 00:15:56.536 "data_size": 65536 00:15:56.536 } 00:15:56.536 ] 00:15:56.536 }' 00:15:56.536 15:23:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.536 15:23:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.795 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.795 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:56.795 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.795 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.795 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.795 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:56.795 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.795 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.795 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:56.795 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.795 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.795 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u be62cc98-60c4-4a11-8f09-e416cb0c8bd1 00:15:56.795 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.795 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.054 [2024-11-20 15:23:43.286606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:57.054 [2024-11-20 
15:23:43.286699] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:57.054 [2024-11-20 15:23:43.286710] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:57.054 [2024-11-20 15:23:43.287008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:57.054 [2024-11-20 15:23:43.294338] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:57.054 [2024-11-20 15:23:43.294375] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:57.054 [2024-11-20 15:23:43.294697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.054 NewBaseBdev 00:15:57.054 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.054 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:57.054 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:57.054 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:57.054 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:57.054 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:57.054 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:57.054 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:57.054 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.054 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.054 15:23:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.054 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:57.054 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.054 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.054 [ 00:15:57.054 { 00:15:57.054 "name": "NewBaseBdev", 00:15:57.054 "aliases": [ 00:15:57.054 "be62cc98-60c4-4a11-8f09-e416cb0c8bd1" 00:15:57.054 ], 00:15:57.054 "product_name": "Malloc disk", 00:15:57.054 "block_size": 512, 00:15:57.054 "num_blocks": 65536, 00:15:57.054 "uuid": "be62cc98-60c4-4a11-8f09-e416cb0c8bd1", 00:15:57.054 "assigned_rate_limits": { 00:15:57.054 "rw_ios_per_sec": 0, 00:15:57.054 "rw_mbytes_per_sec": 0, 00:15:57.054 "r_mbytes_per_sec": 0, 00:15:57.054 "w_mbytes_per_sec": 0 00:15:57.054 }, 00:15:57.054 "claimed": true, 00:15:57.054 "claim_type": "exclusive_write", 00:15:57.054 "zoned": false, 00:15:57.054 "supported_io_types": { 00:15:57.054 "read": true, 00:15:57.054 "write": true, 00:15:57.054 "unmap": true, 00:15:57.054 "flush": true, 00:15:57.054 "reset": true, 00:15:57.054 "nvme_admin": false, 00:15:57.054 "nvme_io": false, 00:15:57.054 "nvme_io_md": false, 00:15:57.054 "write_zeroes": true, 00:15:57.054 "zcopy": true, 00:15:57.054 "get_zone_info": false, 00:15:57.054 "zone_management": false, 00:15:57.054 "zone_append": false, 00:15:57.054 "compare": false, 00:15:57.054 "compare_and_write": false, 00:15:57.054 "abort": true, 00:15:57.055 "seek_hole": false, 00:15:57.055 "seek_data": false, 00:15:57.055 "copy": true, 00:15:57.055 "nvme_iov_md": false 00:15:57.055 }, 00:15:57.055 "memory_domains": [ 00:15:57.055 { 00:15:57.055 "dma_device_id": "system", 00:15:57.055 "dma_device_type": 1 00:15:57.055 }, 00:15:57.055 { 00:15:57.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.055 "dma_device_type": 2 00:15:57.055 } 
00:15:57.055 ], 00:15:57.055 "driver_specific": {} 00:15:57.055 } 00:15:57.055 ] 00:15:57.055 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.055 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:57.055 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:57.055 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.055 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.055 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.055 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.055 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:57.055 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.055 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.055 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.055 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.055 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.055 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.055 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.055 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.055 15:23:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.055 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.055 "name": "Existed_Raid", 00:15:57.055 "uuid": "46b0b213-b2e5-4c5e-b5e3-c1e34e800e4a", 00:15:57.055 "strip_size_kb": 64, 00:15:57.055 "state": "online", 00:15:57.055 "raid_level": "raid5f", 00:15:57.055 "superblock": false, 00:15:57.055 "num_base_bdevs": 4, 00:15:57.055 "num_base_bdevs_discovered": 4, 00:15:57.055 "num_base_bdevs_operational": 4, 00:15:57.055 "base_bdevs_list": [ 00:15:57.055 { 00:15:57.055 "name": "NewBaseBdev", 00:15:57.055 "uuid": "be62cc98-60c4-4a11-8f09-e416cb0c8bd1", 00:15:57.055 "is_configured": true, 00:15:57.055 "data_offset": 0, 00:15:57.055 "data_size": 65536 00:15:57.055 }, 00:15:57.055 { 00:15:57.055 "name": "BaseBdev2", 00:15:57.055 "uuid": "90b89b44-fc6e-43fa-9fa2-01f6485379f5", 00:15:57.055 "is_configured": true, 00:15:57.055 "data_offset": 0, 00:15:57.055 "data_size": 65536 00:15:57.055 }, 00:15:57.055 { 00:15:57.055 "name": "BaseBdev3", 00:15:57.055 "uuid": "79f3f431-42ac-4c9b-ba66-947798b981d9", 00:15:57.055 "is_configured": true, 00:15:57.055 "data_offset": 0, 00:15:57.055 "data_size": 65536 00:15:57.055 }, 00:15:57.055 { 00:15:57.055 "name": "BaseBdev4", 00:15:57.055 "uuid": "9ac1b396-1168-4dfd-9036-a8c14214c11c", 00:15:57.055 "is_configured": true, 00:15:57.055 "data_offset": 0, 00:15:57.055 "data_size": 65536 00:15:57.055 } 00:15:57.055 ] 00:15:57.055 }' 00:15:57.055 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.055 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.312 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:57.312 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:57.312 15:23:43 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:57.312 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:57.312 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:57.312 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:57.312 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:57.312 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:57.312 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.312 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.312 [2024-11-20 15:23:43.787078] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:57.570 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.570 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:57.570 "name": "Existed_Raid", 00:15:57.570 "aliases": [ 00:15:57.570 "46b0b213-b2e5-4c5e-b5e3-c1e34e800e4a" 00:15:57.570 ], 00:15:57.570 "product_name": "Raid Volume", 00:15:57.570 "block_size": 512, 00:15:57.570 "num_blocks": 196608, 00:15:57.570 "uuid": "46b0b213-b2e5-4c5e-b5e3-c1e34e800e4a", 00:15:57.570 "assigned_rate_limits": { 00:15:57.570 "rw_ios_per_sec": 0, 00:15:57.570 "rw_mbytes_per_sec": 0, 00:15:57.570 "r_mbytes_per_sec": 0, 00:15:57.570 "w_mbytes_per_sec": 0 00:15:57.570 }, 00:15:57.570 "claimed": false, 00:15:57.570 "zoned": false, 00:15:57.570 "supported_io_types": { 00:15:57.570 "read": true, 00:15:57.570 "write": true, 00:15:57.570 "unmap": false, 00:15:57.570 "flush": false, 00:15:57.570 "reset": true, 00:15:57.570 "nvme_admin": false, 00:15:57.570 "nvme_io": false, 00:15:57.570 "nvme_io_md": 
false, 00:15:57.570 "write_zeroes": true, 00:15:57.570 "zcopy": false, 00:15:57.570 "get_zone_info": false, 00:15:57.570 "zone_management": false, 00:15:57.570 "zone_append": false, 00:15:57.570 "compare": false, 00:15:57.571 "compare_and_write": false, 00:15:57.571 "abort": false, 00:15:57.571 "seek_hole": false, 00:15:57.571 "seek_data": false, 00:15:57.571 "copy": false, 00:15:57.571 "nvme_iov_md": false 00:15:57.571 }, 00:15:57.571 "driver_specific": { 00:15:57.571 "raid": { 00:15:57.571 "uuid": "46b0b213-b2e5-4c5e-b5e3-c1e34e800e4a", 00:15:57.571 "strip_size_kb": 64, 00:15:57.571 "state": "online", 00:15:57.571 "raid_level": "raid5f", 00:15:57.571 "superblock": false, 00:15:57.571 "num_base_bdevs": 4, 00:15:57.571 "num_base_bdevs_discovered": 4, 00:15:57.571 "num_base_bdevs_operational": 4, 00:15:57.571 "base_bdevs_list": [ 00:15:57.571 { 00:15:57.571 "name": "NewBaseBdev", 00:15:57.571 "uuid": "be62cc98-60c4-4a11-8f09-e416cb0c8bd1", 00:15:57.571 "is_configured": true, 00:15:57.571 "data_offset": 0, 00:15:57.571 "data_size": 65536 00:15:57.571 }, 00:15:57.571 { 00:15:57.571 "name": "BaseBdev2", 00:15:57.571 "uuid": "90b89b44-fc6e-43fa-9fa2-01f6485379f5", 00:15:57.571 "is_configured": true, 00:15:57.571 "data_offset": 0, 00:15:57.571 "data_size": 65536 00:15:57.571 }, 00:15:57.571 { 00:15:57.571 "name": "BaseBdev3", 00:15:57.571 "uuid": "79f3f431-42ac-4c9b-ba66-947798b981d9", 00:15:57.571 "is_configured": true, 00:15:57.571 "data_offset": 0, 00:15:57.571 "data_size": 65536 00:15:57.571 }, 00:15:57.571 { 00:15:57.571 "name": "BaseBdev4", 00:15:57.571 "uuid": "9ac1b396-1168-4dfd-9036-a8c14214c11c", 00:15:57.571 "is_configured": true, 00:15:57.571 "data_offset": 0, 00:15:57.571 "data_size": 65536 00:15:57.571 } 00:15:57.571 ] 00:15:57.571 } 00:15:57.571 } 00:15:57.571 }' 00:15:57.571 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:57.571 15:23:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:57.571 BaseBdev2 00:15:57.571 BaseBdev3 00:15:57.571 BaseBdev4' 00:15:57.571 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.571 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:57.571 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:57.571 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.571 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:57.571 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.571 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.571 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.571 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:57.571 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:57.571 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:57.571 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.571 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:57.571 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.571 15:23:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:57.571 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.571 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:57.571 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:57.571 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:57.571 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:57.571 15:23:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.571 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.571 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.571 15:23:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.571 15:23:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:57.571 15:23:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:57.571 15:23:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:57.571 15:23:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:57.571 15:23:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.571 15:23:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.571 15:23:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.571 15:23:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.830 15:23:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:57.830 15:23:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:57.830 15:23:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:57.830 15:23:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.830 15:23:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.830 [2024-11-20 15:23:44.074845] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:57.830 [2024-11-20 15:23:44.074886] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:57.830 [2024-11-20 15:23:44.074972] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.830 [2024-11-20 15:23:44.075266] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:57.830 [2024-11-20 15:23:44.075279] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:57.830 15:23:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.830 15:23:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82588 00:15:57.830 15:23:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82588 ']' 00:15:57.830 15:23:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82588 00:15:57.830 15:23:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:57.830 15:23:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:15:57.830 15:23:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82588 00:15:57.830 15:23:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:57.830 killing process with pid 82588 00:15:57.830 15:23:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:57.830 15:23:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82588' 00:15:57.830 15:23:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82588 00:15:57.830 [2024-11-20 15:23:44.128157] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:57.830 15:23:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82588 00:15:58.088 [2024-11-20 15:23:44.525693] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:59.463 15:23:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:59.463 00:15:59.463 real 0m11.523s 00:15:59.463 user 0m18.167s 00:15:59.463 sys 0m2.406s 00:15:59.463 15:23:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:59.463 ************************************ 00:15:59.463 END TEST raid5f_state_function_test 00:15:59.463 ************************************ 00:15:59.463 15:23:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.463 15:23:45 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:15:59.463 15:23:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:59.463 15:23:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:59.463 15:23:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:59.463 ************************************ 00:15:59.464 START TEST 
raid5f_state_function_test_sb 00:15:59.464 ************************************ 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:59.464 
15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83256 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83256' 00:15:59.464 Process raid pid: 83256 00:15:59.464 15:23:45 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83256 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83256 ']' 00:15:59.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:59.464 15:23:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.464 [2024-11-20 15:23:45.878415] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:15:59.464 [2024-11-20 15:23:45.878547] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:59.721 [2024-11-20 15:23:46.058998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.721 [2024-11-20 15:23:46.190008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.978 [2024-11-20 15:23:46.406997] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:59.978 [2024-11-20 15:23:46.407252] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:00.543 15:23:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:00.543 15:23:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:00.543 15:23:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:00.543 15:23:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.543 15:23:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.543 [2024-11-20 15:23:46.730546] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:00.543 [2024-11-20 15:23:46.730613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:00.543 [2024-11-20 15:23:46.730626] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:00.543 [2024-11-20 15:23:46.730640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:00.543 [2024-11-20 15:23:46.730648] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:16:00.543 [2024-11-20 15:23:46.730697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:00.543 [2024-11-20 15:23:46.730706] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:00.543 [2024-11-20 15:23:46.730719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:00.543 15:23:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.543 15:23:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:00.543 15:23:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:00.543 15:23:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:00.543 15:23:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.543 15:23:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.543 15:23:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:00.543 15:23:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.543 15:23:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.543 15:23:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.543 15:23:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.543 15:23:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.543 15:23:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:00.543 15:23:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.543 15:23:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.543 15:23:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.543 15:23:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.543 "name": "Existed_Raid", 00:16:00.543 "uuid": "e7a6d17b-6f5e-4be3-a1a8-b233920b3a38", 00:16:00.543 "strip_size_kb": 64, 00:16:00.543 "state": "configuring", 00:16:00.543 "raid_level": "raid5f", 00:16:00.543 "superblock": true, 00:16:00.543 "num_base_bdevs": 4, 00:16:00.543 "num_base_bdevs_discovered": 0, 00:16:00.543 "num_base_bdevs_operational": 4, 00:16:00.543 "base_bdevs_list": [ 00:16:00.543 { 00:16:00.543 "name": "BaseBdev1", 00:16:00.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.543 "is_configured": false, 00:16:00.543 "data_offset": 0, 00:16:00.543 "data_size": 0 00:16:00.543 }, 00:16:00.543 { 00:16:00.543 "name": "BaseBdev2", 00:16:00.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.543 "is_configured": false, 00:16:00.543 "data_offset": 0, 00:16:00.543 "data_size": 0 00:16:00.543 }, 00:16:00.543 { 00:16:00.543 "name": "BaseBdev3", 00:16:00.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.543 "is_configured": false, 00:16:00.543 "data_offset": 0, 00:16:00.543 "data_size": 0 00:16:00.543 }, 00:16:00.543 { 00:16:00.543 "name": "BaseBdev4", 00:16:00.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.543 "is_configured": false, 00:16:00.543 "data_offset": 0, 00:16:00.543 "data_size": 0 00:16:00.543 } 00:16:00.544 ] 00:16:00.544 }' 00:16:00.544 15:23:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.544 15:23:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.802 [2024-11-20 15:23:47.129901] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:00.802 [2024-11-20 15:23:47.129945] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.802 [2024-11-20 15:23:47.141907] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:00.802 [2024-11-20 15:23:47.141963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:00.802 [2024-11-20 15:23:47.141974] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:00.802 [2024-11-20 15:23:47.141987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:00.802 [2024-11-20 15:23:47.141995] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:00.802 [2024-11-20 15:23:47.142007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:00.802 [2024-11-20 15:23:47.142015] 
bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:00.802 [2024-11-20 15:23:47.142026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.802 [2024-11-20 15:23:47.194287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:00.802 BaseBdev1 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.802 [ 00:16:00.802 { 00:16:00.802 "name": "BaseBdev1", 00:16:00.802 "aliases": [ 00:16:00.802 "012598ff-f5c6-4044-b4ba-8df57aaf033a" 00:16:00.802 ], 00:16:00.802 "product_name": "Malloc disk", 00:16:00.802 "block_size": 512, 00:16:00.802 "num_blocks": 65536, 00:16:00.802 "uuid": "012598ff-f5c6-4044-b4ba-8df57aaf033a", 00:16:00.802 "assigned_rate_limits": { 00:16:00.802 "rw_ios_per_sec": 0, 00:16:00.802 "rw_mbytes_per_sec": 0, 00:16:00.802 "r_mbytes_per_sec": 0, 00:16:00.802 "w_mbytes_per_sec": 0 00:16:00.802 }, 00:16:00.802 "claimed": true, 00:16:00.802 "claim_type": "exclusive_write", 00:16:00.802 "zoned": false, 00:16:00.802 "supported_io_types": { 00:16:00.802 "read": true, 00:16:00.802 "write": true, 00:16:00.802 "unmap": true, 00:16:00.802 "flush": true, 00:16:00.802 "reset": true, 00:16:00.802 "nvme_admin": false, 00:16:00.802 "nvme_io": false, 00:16:00.802 "nvme_io_md": false, 00:16:00.802 "write_zeroes": true, 00:16:00.802 "zcopy": true, 00:16:00.802 "get_zone_info": false, 00:16:00.802 "zone_management": false, 00:16:00.802 "zone_append": false, 00:16:00.802 "compare": false, 00:16:00.802 "compare_and_write": false, 00:16:00.802 "abort": true, 00:16:00.802 "seek_hole": false, 00:16:00.802 "seek_data": false, 00:16:00.802 "copy": true, 00:16:00.802 "nvme_iov_md": false 00:16:00.802 }, 00:16:00.802 "memory_domains": [ 00:16:00.802 { 00:16:00.802 "dma_device_id": "system", 00:16:00.802 "dma_device_type": 1 00:16:00.802 }, 00:16:00.802 { 00:16:00.802 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:00.802 "dma_device_type": 2 00:16:00.802 } 00:16:00.802 ], 00:16:00.802 "driver_specific": {} 00:16:00.802 } 00:16:00.802 ] 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.802 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.061 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.061 "name": "Existed_Raid", 00:16:01.061 "uuid": "de674f72-c8ea-42be-974c-1c162f08a667", 00:16:01.061 "strip_size_kb": 64, 00:16:01.061 "state": "configuring", 00:16:01.061 "raid_level": "raid5f", 00:16:01.061 "superblock": true, 00:16:01.061 "num_base_bdevs": 4, 00:16:01.061 "num_base_bdevs_discovered": 1, 00:16:01.061 "num_base_bdevs_operational": 4, 00:16:01.061 "base_bdevs_list": [ 00:16:01.061 { 00:16:01.061 "name": "BaseBdev1", 00:16:01.061 "uuid": "012598ff-f5c6-4044-b4ba-8df57aaf033a", 00:16:01.061 "is_configured": true, 00:16:01.061 "data_offset": 2048, 00:16:01.061 "data_size": 63488 00:16:01.061 }, 00:16:01.061 { 00:16:01.061 "name": "BaseBdev2", 00:16:01.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.061 "is_configured": false, 00:16:01.061 "data_offset": 0, 00:16:01.061 "data_size": 0 00:16:01.061 }, 00:16:01.061 { 00:16:01.061 "name": "BaseBdev3", 00:16:01.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.061 "is_configured": false, 00:16:01.061 "data_offset": 0, 00:16:01.061 "data_size": 0 00:16:01.061 }, 00:16:01.061 { 00:16:01.061 "name": "BaseBdev4", 00:16:01.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.061 "is_configured": false, 00:16:01.061 "data_offset": 0, 00:16:01.061 "data_size": 0 00:16:01.061 } 00:16:01.061 ] 00:16:01.061 }' 00:16:01.061 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.061 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.319 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:01.319 15:23:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.319 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.319 [2024-11-20 15:23:47.649814] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:01.319 [2024-11-20 15:23:47.649876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:01.319 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.319 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:01.319 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.319 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.319 [2024-11-20 15:23:47.661896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:01.319 [2024-11-20 15:23:47.664232] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:01.319 [2024-11-20 15:23:47.664419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:01.319 [2024-11-20 15:23:47.664556] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:01.319 [2024-11-20 15:23:47.664605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:01.319 [2024-11-20 15:23:47.664634] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:01.319 [2024-11-20 15:23:47.664739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:01.319 15:23:47 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.319 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:01.319 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:01.319 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:01.319 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:01.319 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:01.319 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.319 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.319 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:01.319 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.319 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.319 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.319 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.319 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.319 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.319 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.319 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.320 15:23:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.320 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.320 "name": "Existed_Raid", 00:16:01.320 "uuid": "01cd3e0a-ef92-4bbd-aebc-3f2f8334fe28", 00:16:01.320 "strip_size_kb": 64, 00:16:01.320 "state": "configuring", 00:16:01.320 "raid_level": "raid5f", 00:16:01.320 "superblock": true, 00:16:01.320 "num_base_bdevs": 4, 00:16:01.320 "num_base_bdevs_discovered": 1, 00:16:01.320 "num_base_bdevs_operational": 4, 00:16:01.320 "base_bdevs_list": [ 00:16:01.320 { 00:16:01.320 "name": "BaseBdev1", 00:16:01.320 "uuid": "012598ff-f5c6-4044-b4ba-8df57aaf033a", 00:16:01.320 "is_configured": true, 00:16:01.320 "data_offset": 2048, 00:16:01.320 "data_size": 63488 00:16:01.320 }, 00:16:01.320 { 00:16:01.320 "name": "BaseBdev2", 00:16:01.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.320 "is_configured": false, 00:16:01.320 "data_offset": 0, 00:16:01.320 "data_size": 0 00:16:01.320 }, 00:16:01.320 { 00:16:01.320 "name": "BaseBdev3", 00:16:01.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.320 "is_configured": false, 00:16:01.320 "data_offset": 0, 00:16:01.320 "data_size": 0 00:16:01.320 }, 00:16:01.320 { 00:16:01.320 "name": "BaseBdev4", 00:16:01.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.320 "is_configured": false, 00:16:01.320 "data_offset": 0, 00:16:01.320 "data_size": 0 00:16:01.320 } 00:16:01.320 ] 00:16:01.320 }' 00:16:01.320 15:23:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.320 15:23:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.886 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:01.886 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:01.886 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.886 [2024-11-20 15:23:48.119722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:01.886 BaseBdev2 00:16:01.886 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.886 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:01.886 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:01.886 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:01.886 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:01.886 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:01.886 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:01.886 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:01.886 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.886 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.886 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.886 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:01.886 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.886 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.886 [ 00:16:01.886 { 00:16:01.886 "name": "BaseBdev2", 00:16:01.886 "aliases": [ 00:16:01.886 
"fabafb5e-6cb0-4db5-a84e-b67820da565a" 00:16:01.886 ], 00:16:01.886 "product_name": "Malloc disk", 00:16:01.886 "block_size": 512, 00:16:01.886 "num_blocks": 65536, 00:16:01.886 "uuid": "fabafb5e-6cb0-4db5-a84e-b67820da565a", 00:16:01.886 "assigned_rate_limits": { 00:16:01.886 "rw_ios_per_sec": 0, 00:16:01.886 "rw_mbytes_per_sec": 0, 00:16:01.886 "r_mbytes_per_sec": 0, 00:16:01.886 "w_mbytes_per_sec": 0 00:16:01.886 }, 00:16:01.886 "claimed": true, 00:16:01.886 "claim_type": "exclusive_write", 00:16:01.886 "zoned": false, 00:16:01.886 "supported_io_types": { 00:16:01.886 "read": true, 00:16:01.886 "write": true, 00:16:01.886 "unmap": true, 00:16:01.886 "flush": true, 00:16:01.886 "reset": true, 00:16:01.887 "nvme_admin": false, 00:16:01.887 "nvme_io": false, 00:16:01.887 "nvme_io_md": false, 00:16:01.887 "write_zeroes": true, 00:16:01.887 "zcopy": true, 00:16:01.887 "get_zone_info": false, 00:16:01.887 "zone_management": false, 00:16:01.887 "zone_append": false, 00:16:01.887 "compare": false, 00:16:01.887 "compare_and_write": false, 00:16:01.887 "abort": true, 00:16:01.887 "seek_hole": false, 00:16:01.887 "seek_data": false, 00:16:01.887 "copy": true, 00:16:01.887 "nvme_iov_md": false 00:16:01.887 }, 00:16:01.887 "memory_domains": [ 00:16:01.887 { 00:16:01.887 "dma_device_id": "system", 00:16:01.887 "dma_device_type": 1 00:16:01.887 }, 00:16:01.887 { 00:16:01.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.887 "dma_device_type": 2 00:16:01.887 } 00:16:01.887 ], 00:16:01.887 "driver_specific": {} 00:16:01.887 } 00:16:01.887 ] 00:16:01.887 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.887 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:01.887 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:01.887 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:16:01.887 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:01.887 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:01.887 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:01.887 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.887 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.887 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:01.887 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.887 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.887 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.887 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.887 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.887 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.887 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.887 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.887 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.887 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.887 "name": "Existed_Raid", 00:16:01.887 "uuid": 
"01cd3e0a-ef92-4bbd-aebc-3f2f8334fe28", 00:16:01.887 "strip_size_kb": 64, 00:16:01.887 "state": "configuring", 00:16:01.887 "raid_level": "raid5f", 00:16:01.887 "superblock": true, 00:16:01.887 "num_base_bdevs": 4, 00:16:01.887 "num_base_bdevs_discovered": 2, 00:16:01.887 "num_base_bdevs_operational": 4, 00:16:01.887 "base_bdevs_list": [ 00:16:01.887 { 00:16:01.887 "name": "BaseBdev1", 00:16:01.887 "uuid": "012598ff-f5c6-4044-b4ba-8df57aaf033a", 00:16:01.887 "is_configured": true, 00:16:01.887 "data_offset": 2048, 00:16:01.887 "data_size": 63488 00:16:01.887 }, 00:16:01.887 { 00:16:01.887 "name": "BaseBdev2", 00:16:01.887 "uuid": "fabafb5e-6cb0-4db5-a84e-b67820da565a", 00:16:01.887 "is_configured": true, 00:16:01.887 "data_offset": 2048, 00:16:01.887 "data_size": 63488 00:16:01.887 }, 00:16:01.887 { 00:16:01.887 "name": "BaseBdev3", 00:16:01.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.887 "is_configured": false, 00:16:01.887 "data_offset": 0, 00:16:01.887 "data_size": 0 00:16:01.887 }, 00:16:01.887 { 00:16:01.887 "name": "BaseBdev4", 00:16:01.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.887 "is_configured": false, 00:16:01.887 "data_offset": 0, 00:16:01.887 "data_size": 0 00:16:01.887 } 00:16:01.887 ] 00:16:01.887 }' 00:16:01.887 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.887 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.145 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:02.145 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.146 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.405 [2024-11-20 15:23:48.637858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:02.405 BaseBdev3 
00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.405 [ 00:16:02.405 { 00:16:02.405 "name": "BaseBdev3", 00:16:02.405 "aliases": [ 00:16:02.405 "9fa0e693-d15c-444c-8019-6cd4db4195fa" 00:16:02.405 ], 00:16:02.405 "product_name": "Malloc disk", 00:16:02.405 "block_size": 512, 00:16:02.405 "num_blocks": 65536, 00:16:02.405 "uuid": "9fa0e693-d15c-444c-8019-6cd4db4195fa", 00:16:02.405 
"assigned_rate_limits": { 00:16:02.405 "rw_ios_per_sec": 0, 00:16:02.405 "rw_mbytes_per_sec": 0, 00:16:02.405 "r_mbytes_per_sec": 0, 00:16:02.405 "w_mbytes_per_sec": 0 00:16:02.405 }, 00:16:02.405 "claimed": true, 00:16:02.405 "claim_type": "exclusive_write", 00:16:02.405 "zoned": false, 00:16:02.405 "supported_io_types": { 00:16:02.405 "read": true, 00:16:02.405 "write": true, 00:16:02.405 "unmap": true, 00:16:02.405 "flush": true, 00:16:02.405 "reset": true, 00:16:02.405 "nvme_admin": false, 00:16:02.405 "nvme_io": false, 00:16:02.405 "nvme_io_md": false, 00:16:02.405 "write_zeroes": true, 00:16:02.405 "zcopy": true, 00:16:02.405 "get_zone_info": false, 00:16:02.405 "zone_management": false, 00:16:02.405 "zone_append": false, 00:16:02.405 "compare": false, 00:16:02.405 "compare_and_write": false, 00:16:02.405 "abort": true, 00:16:02.405 "seek_hole": false, 00:16:02.405 "seek_data": false, 00:16:02.405 "copy": true, 00:16:02.405 "nvme_iov_md": false 00:16:02.405 }, 00:16:02.405 "memory_domains": [ 00:16:02.405 { 00:16:02.405 "dma_device_id": "system", 00:16:02.405 "dma_device_type": 1 00:16:02.405 }, 00:16:02.405 { 00:16:02.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.405 "dma_device_type": 2 00:16:02.405 } 00:16:02.405 ], 00:16:02.405 "driver_specific": {} 00:16:02.405 } 00:16:02.405 ] 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.405 "name": "Existed_Raid", 00:16:02.405 "uuid": "01cd3e0a-ef92-4bbd-aebc-3f2f8334fe28", 00:16:02.405 "strip_size_kb": 64, 00:16:02.405 "state": "configuring", 00:16:02.405 "raid_level": "raid5f", 00:16:02.405 "superblock": true, 00:16:02.405 "num_base_bdevs": 4, 00:16:02.405 "num_base_bdevs_discovered": 3, 
00:16:02.405 "num_base_bdevs_operational": 4, 00:16:02.405 "base_bdevs_list": [ 00:16:02.405 { 00:16:02.405 "name": "BaseBdev1", 00:16:02.405 "uuid": "012598ff-f5c6-4044-b4ba-8df57aaf033a", 00:16:02.405 "is_configured": true, 00:16:02.405 "data_offset": 2048, 00:16:02.405 "data_size": 63488 00:16:02.405 }, 00:16:02.405 { 00:16:02.405 "name": "BaseBdev2", 00:16:02.405 "uuid": "fabafb5e-6cb0-4db5-a84e-b67820da565a", 00:16:02.405 "is_configured": true, 00:16:02.405 "data_offset": 2048, 00:16:02.405 "data_size": 63488 00:16:02.405 }, 00:16:02.405 { 00:16:02.405 "name": "BaseBdev3", 00:16:02.405 "uuid": "9fa0e693-d15c-444c-8019-6cd4db4195fa", 00:16:02.405 "is_configured": true, 00:16:02.405 "data_offset": 2048, 00:16:02.405 "data_size": 63488 00:16:02.405 }, 00:16:02.405 { 00:16:02.405 "name": "BaseBdev4", 00:16:02.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.405 "is_configured": false, 00:16:02.405 "data_offset": 0, 00:16:02.405 "data_size": 0 00:16:02.405 } 00:16:02.405 ] 00:16:02.405 }' 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.405 15:23:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.679 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:02.679 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.679 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.679 [2024-11-20 15:23:49.136172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:02.679 [2024-11-20 15:23:49.136471] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:02.679 [2024-11-20 15:23:49.136488] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:02.679 [2024-11-20 
15:23:49.136814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:02.679 BaseBdev4 00:16:02.679 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.679 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:02.679 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:02.679 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:02.679 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:02.679 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:02.679 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:02.679 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:02.679 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.679 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.679 [2024-11-20 15:23:49.144148] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:02.679 [2024-11-20 15:23:49.144179] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:02.679 [2024-11-20 15:23:49.144467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.951 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.951 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:02.951 15:23:49 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.951 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.951 [ 00:16:02.951 { 00:16:02.951 "name": "BaseBdev4", 00:16:02.951 "aliases": [ 00:16:02.951 "98fac42a-de92-4890-a36c-c714f530f1c2" 00:16:02.951 ], 00:16:02.951 "product_name": "Malloc disk", 00:16:02.951 "block_size": 512, 00:16:02.951 "num_blocks": 65536, 00:16:02.952 "uuid": "98fac42a-de92-4890-a36c-c714f530f1c2", 00:16:02.952 "assigned_rate_limits": { 00:16:02.952 "rw_ios_per_sec": 0, 00:16:02.952 "rw_mbytes_per_sec": 0, 00:16:02.952 "r_mbytes_per_sec": 0, 00:16:02.952 "w_mbytes_per_sec": 0 00:16:02.952 }, 00:16:02.952 "claimed": true, 00:16:02.952 "claim_type": "exclusive_write", 00:16:02.952 "zoned": false, 00:16:02.952 "supported_io_types": { 00:16:02.952 "read": true, 00:16:02.952 "write": true, 00:16:02.952 "unmap": true, 00:16:02.952 "flush": true, 00:16:02.952 "reset": true, 00:16:02.952 "nvme_admin": false, 00:16:02.952 "nvme_io": false, 00:16:02.952 "nvme_io_md": false, 00:16:02.952 "write_zeroes": true, 00:16:02.952 "zcopy": true, 00:16:02.952 "get_zone_info": false, 00:16:02.952 "zone_management": false, 00:16:02.952 "zone_append": false, 00:16:02.952 "compare": false, 00:16:02.952 "compare_and_write": false, 00:16:02.952 "abort": true, 00:16:02.952 "seek_hole": false, 00:16:02.952 "seek_data": false, 00:16:02.952 "copy": true, 00:16:02.952 "nvme_iov_md": false 00:16:02.952 }, 00:16:02.952 "memory_domains": [ 00:16:02.952 { 00:16:02.952 "dma_device_id": "system", 00:16:02.952 "dma_device_type": 1 00:16:02.952 }, 00:16:02.952 { 00:16:02.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.952 "dma_device_type": 2 00:16:02.952 } 00:16:02.952 ], 00:16:02.952 "driver_specific": {} 00:16:02.952 } 00:16:02.952 ] 00:16:02.952 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.952 15:23:49 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:02.952 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:02.952 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:02.952 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:02.952 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:02.952 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:02.952 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.952 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.952 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:02.952 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.952 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.952 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.952 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.952 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.952 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.952 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.952 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:02.952 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.952 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.952 "name": "Existed_Raid", 00:16:02.952 "uuid": "01cd3e0a-ef92-4bbd-aebc-3f2f8334fe28", 00:16:02.952 "strip_size_kb": 64, 00:16:02.952 "state": "online", 00:16:02.952 "raid_level": "raid5f", 00:16:02.952 "superblock": true, 00:16:02.952 "num_base_bdevs": 4, 00:16:02.952 "num_base_bdevs_discovered": 4, 00:16:02.952 "num_base_bdevs_operational": 4, 00:16:02.952 "base_bdevs_list": [ 00:16:02.952 { 00:16:02.952 "name": "BaseBdev1", 00:16:02.952 "uuid": "012598ff-f5c6-4044-b4ba-8df57aaf033a", 00:16:02.952 "is_configured": true, 00:16:02.952 "data_offset": 2048, 00:16:02.952 "data_size": 63488 00:16:02.952 }, 00:16:02.952 { 00:16:02.952 "name": "BaseBdev2", 00:16:02.952 "uuid": "fabafb5e-6cb0-4db5-a84e-b67820da565a", 00:16:02.952 "is_configured": true, 00:16:02.952 "data_offset": 2048, 00:16:02.952 "data_size": 63488 00:16:02.952 }, 00:16:02.952 { 00:16:02.952 "name": "BaseBdev3", 00:16:02.952 "uuid": "9fa0e693-d15c-444c-8019-6cd4db4195fa", 00:16:02.952 "is_configured": true, 00:16:02.952 "data_offset": 2048, 00:16:02.952 "data_size": 63488 00:16:02.952 }, 00:16:02.952 { 00:16:02.952 "name": "BaseBdev4", 00:16:02.952 "uuid": "98fac42a-de92-4890-a36c-c714f530f1c2", 00:16:02.952 "is_configured": true, 00:16:02.952 "data_offset": 2048, 00:16:02.952 "data_size": 63488 00:16:02.952 } 00:16:02.952 ] 00:16:02.952 }' 00:16:02.952 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.952 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.212 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:03.212 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:16:03.212 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:03.212 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:03.212 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:03.212 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:03.212 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:03.212 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.212 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.212 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:03.212 [2024-11-20 15:23:49.584743] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:03.212 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.212 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:03.212 "name": "Existed_Raid", 00:16:03.212 "aliases": [ 00:16:03.212 "01cd3e0a-ef92-4bbd-aebc-3f2f8334fe28" 00:16:03.212 ], 00:16:03.212 "product_name": "Raid Volume", 00:16:03.212 "block_size": 512, 00:16:03.212 "num_blocks": 190464, 00:16:03.212 "uuid": "01cd3e0a-ef92-4bbd-aebc-3f2f8334fe28", 00:16:03.212 "assigned_rate_limits": { 00:16:03.212 "rw_ios_per_sec": 0, 00:16:03.212 "rw_mbytes_per_sec": 0, 00:16:03.212 "r_mbytes_per_sec": 0, 00:16:03.212 "w_mbytes_per_sec": 0 00:16:03.212 }, 00:16:03.212 "claimed": false, 00:16:03.212 "zoned": false, 00:16:03.212 "supported_io_types": { 00:16:03.212 "read": true, 00:16:03.212 "write": true, 00:16:03.212 "unmap": false, 00:16:03.212 "flush": false, 
00:16:03.212 "reset": true, 00:16:03.212 "nvme_admin": false, 00:16:03.212 "nvme_io": false, 00:16:03.212 "nvme_io_md": false, 00:16:03.212 "write_zeroes": true, 00:16:03.212 "zcopy": false, 00:16:03.212 "get_zone_info": false, 00:16:03.212 "zone_management": false, 00:16:03.212 "zone_append": false, 00:16:03.212 "compare": false, 00:16:03.212 "compare_and_write": false, 00:16:03.212 "abort": false, 00:16:03.212 "seek_hole": false, 00:16:03.212 "seek_data": false, 00:16:03.212 "copy": false, 00:16:03.212 "nvme_iov_md": false 00:16:03.212 }, 00:16:03.212 "driver_specific": { 00:16:03.212 "raid": { 00:16:03.212 "uuid": "01cd3e0a-ef92-4bbd-aebc-3f2f8334fe28", 00:16:03.212 "strip_size_kb": 64, 00:16:03.212 "state": "online", 00:16:03.212 "raid_level": "raid5f", 00:16:03.212 "superblock": true, 00:16:03.212 "num_base_bdevs": 4, 00:16:03.212 "num_base_bdevs_discovered": 4, 00:16:03.212 "num_base_bdevs_operational": 4, 00:16:03.212 "base_bdevs_list": [ 00:16:03.212 { 00:16:03.212 "name": "BaseBdev1", 00:16:03.212 "uuid": "012598ff-f5c6-4044-b4ba-8df57aaf033a", 00:16:03.212 "is_configured": true, 00:16:03.212 "data_offset": 2048, 00:16:03.212 "data_size": 63488 00:16:03.212 }, 00:16:03.212 { 00:16:03.212 "name": "BaseBdev2", 00:16:03.212 "uuid": "fabafb5e-6cb0-4db5-a84e-b67820da565a", 00:16:03.212 "is_configured": true, 00:16:03.212 "data_offset": 2048, 00:16:03.212 "data_size": 63488 00:16:03.212 }, 00:16:03.212 { 00:16:03.212 "name": "BaseBdev3", 00:16:03.212 "uuid": "9fa0e693-d15c-444c-8019-6cd4db4195fa", 00:16:03.212 "is_configured": true, 00:16:03.212 "data_offset": 2048, 00:16:03.212 "data_size": 63488 00:16:03.212 }, 00:16:03.212 { 00:16:03.212 "name": "BaseBdev4", 00:16:03.212 "uuid": "98fac42a-de92-4890-a36c-c714f530f1c2", 00:16:03.212 "is_configured": true, 00:16:03.212 "data_offset": 2048, 00:16:03.212 "data_size": 63488 00:16:03.212 } 00:16:03.212 ] 00:16:03.212 } 00:16:03.212 } 00:16:03.212 }' 00:16:03.212 15:23:49 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:03.212 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:03.212 BaseBdev2 00:16:03.212 BaseBdev3 00:16:03.212 BaseBdev4' 00:16:03.212 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.471 15:23:49 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:03.471 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.472 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.472 [2024-11-20 15:23:49.892140] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:03.730 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.730 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:03.730 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:03.730 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:03.730 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:03.730 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:03.730 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:03.730 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:03.730 15:23:49 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.730 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.730 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.730 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:03.730 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.730 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.730 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.730 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.730 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.730 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.730 15:23:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.730 15:23:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.730 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.730 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.730 "name": "Existed_Raid", 00:16:03.730 "uuid": "01cd3e0a-ef92-4bbd-aebc-3f2f8334fe28", 00:16:03.730 "strip_size_kb": 64, 00:16:03.730 "state": "online", 00:16:03.730 "raid_level": "raid5f", 00:16:03.730 "superblock": true, 00:16:03.730 "num_base_bdevs": 4, 00:16:03.730 "num_base_bdevs_discovered": 3, 00:16:03.730 "num_base_bdevs_operational": 3, 00:16:03.730 "base_bdevs_list": [ 00:16:03.730 { 00:16:03.730 "name": 
null, 00:16:03.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.730 "is_configured": false, 00:16:03.730 "data_offset": 0, 00:16:03.730 "data_size": 63488 00:16:03.730 }, 00:16:03.730 { 00:16:03.731 "name": "BaseBdev2", 00:16:03.731 "uuid": "fabafb5e-6cb0-4db5-a84e-b67820da565a", 00:16:03.731 "is_configured": true, 00:16:03.731 "data_offset": 2048, 00:16:03.731 "data_size": 63488 00:16:03.731 }, 00:16:03.731 { 00:16:03.731 "name": "BaseBdev3", 00:16:03.731 "uuid": "9fa0e693-d15c-444c-8019-6cd4db4195fa", 00:16:03.731 "is_configured": true, 00:16:03.731 "data_offset": 2048, 00:16:03.731 "data_size": 63488 00:16:03.731 }, 00:16:03.731 { 00:16:03.731 "name": "BaseBdev4", 00:16:03.731 "uuid": "98fac42a-de92-4890-a36c-c714f530f1c2", 00:16:03.731 "is_configured": true, 00:16:03.731 "data_offset": 2048, 00:16:03.731 "data_size": 63488 00:16:03.731 } 00:16:03.731 ] 00:16:03.731 }' 00:16:03.731 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.731 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.989 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:03.989 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:03.989 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.989 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.989 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.989 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:03.989 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.248 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:16:04.248 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:04.248 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:04.248 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.248 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.248 [2024-11-20 15:23:50.477436] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:04.248 [2024-11-20 15:23:50.477612] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:04.248 [2024-11-20 15:23:50.571871] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:04.248 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.248 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:04.248 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:04.248 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.248 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:04.248 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.248 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.248 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.248 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:04.248 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:16:04.248 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:04.248 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.248 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.248 [2024-11-20 15:23:50.627886] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:04.248 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.248 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:04.248 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.507 [2024-11-20 
15:23:50.781009] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:04.507 [2024-11-20 15:23:50.781245] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.507 15:23:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.507 BaseBdev2 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.507 15:23:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.766 [ 00:16:04.766 { 00:16:04.766 "name": "BaseBdev2", 00:16:04.766 "aliases": [ 00:16:04.766 "9ead094b-5af2-447d-8365-d8ae40ca620c" 00:16:04.766 ], 00:16:04.766 "product_name": "Malloc disk", 00:16:04.766 "block_size": 512, 00:16:04.766 
"num_blocks": 65536, 00:16:04.766 "uuid": "9ead094b-5af2-447d-8365-d8ae40ca620c", 00:16:04.766 "assigned_rate_limits": { 00:16:04.766 "rw_ios_per_sec": 0, 00:16:04.766 "rw_mbytes_per_sec": 0, 00:16:04.766 "r_mbytes_per_sec": 0, 00:16:04.766 "w_mbytes_per_sec": 0 00:16:04.766 }, 00:16:04.766 "claimed": false, 00:16:04.766 "zoned": false, 00:16:04.767 "supported_io_types": { 00:16:04.767 "read": true, 00:16:04.767 "write": true, 00:16:04.767 "unmap": true, 00:16:04.767 "flush": true, 00:16:04.767 "reset": true, 00:16:04.767 "nvme_admin": false, 00:16:04.767 "nvme_io": false, 00:16:04.767 "nvme_io_md": false, 00:16:04.767 "write_zeroes": true, 00:16:04.767 "zcopy": true, 00:16:04.767 "get_zone_info": false, 00:16:04.767 "zone_management": false, 00:16:04.767 "zone_append": false, 00:16:04.767 "compare": false, 00:16:04.767 "compare_and_write": false, 00:16:04.767 "abort": true, 00:16:04.767 "seek_hole": false, 00:16:04.767 "seek_data": false, 00:16:04.767 "copy": true, 00:16:04.767 "nvme_iov_md": false 00:16:04.767 }, 00:16:04.767 "memory_domains": [ 00:16:04.767 { 00:16:04.767 "dma_device_id": "system", 00:16:04.767 "dma_device_type": 1 00:16:04.767 }, 00:16:04.767 { 00:16:04.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.767 "dma_device_type": 2 00:16:04.767 } 00:16:04.767 ], 00:16:04.767 "driver_specific": {} 00:16:04.767 } 00:16:04.767 ] 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:04.767 15:23:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.767 BaseBdev3 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.767 [ 00:16:04.767 { 00:16:04.767 "name": "BaseBdev3", 00:16:04.767 "aliases": [ 00:16:04.767 
"9c7cf1c5-e8d4-4b7d-b549-8f419e2c5d1b" 00:16:04.767 ], 00:16:04.767 "product_name": "Malloc disk", 00:16:04.767 "block_size": 512, 00:16:04.767 "num_blocks": 65536, 00:16:04.767 "uuid": "9c7cf1c5-e8d4-4b7d-b549-8f419e2c5d1b", 00:16:04.767 "assigned_rate_limits": { 00:16:04.767 "rw_ios_per_sec": 0, 00:16:04.767 "rw_mbytes_per_sec": 0, 00:16:04.767 "r_mbytes_per_sec": 0, 00:16:04.767 "w_mbytes_per_sec": 0 00:16:04.767 }, 00:16:04.767 "claimed": false, 00:16:04.767 "zoned": false, 00:16:04.767 "supported_io_types": { 00:16:04.767 "read": true, 00:16:04.767 "write": true, 00:16:04.767 "unmap": true, 00:16:04.767 "flush": true, 00:16:04.767 "reset": true, 00:16:04.767 "nvme_admin": false, 00:16:04.767 "nvme_io": false, 00:16:04.767 "nvme_io_md": false, 00:16:04.767 "write_zeroes": true, 00:16:04.767 "zcopy": true, 00:16:04.767 "get_zone_info": false, 00:16:04.767 "zone_management": false, 00:16:04.767 "zone_append": false, 00:16:04.767 "compare": false, 00:16:04.767 "compare_and_write": false, 00:16:04.767 "abort": true, 00:16:04.767 "seek_hole": false, 00:16:04.767 "seek_data": false, 00:16:04.767 "copy": true, 00:16:04.767 "nvme_iov_md": false 00:16:04.767 }, 00:16:04.767 "memory_domains": [ 00:16:04.767 { 00:16:04.767 "dma_device_id": "system", 00:16:04.767 "dma_device_type": 1 00:16:04.767 }, 00:16:04.767 { 00:16:04.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.767 "dma_device_type": 2 00:16:04.767 } 00:16:04.767 ], 00:16:04.767 "driver_specific": {} 00:16:04.767 } 00:16:04.767 ] 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:04.767 15:23:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.767 BaseBdev4 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:04.767 [ 00:16:04.767 { 00:16:04.767 "name": "BaseBdev4", 00:16:04.767 "aliases": [ 00:16:04.767 "6c4c33dd-a071-460c-ad05-d1e3238d1894" 00:16:04.767 ], 00:16:04.767 "product_name": "Malloc disk", 00:16:04.767 "block_size": 512, 00:16:04.767 "num_blocks": 65536, 00:16:04.767 "uuid": "6c4c33dd-a071-460c-ad05-d1e3238d1894", 00:16:04.767 "assigned_rate_limits": { 00:16:04.767 "rw_ios_per_sec": 0, 00:16:04.767 "rw_mbytes_per_sec": 0, 00:16:04.767 "r_mbytes_per_sec": 0, 00:16:04.767 "w_mbytes_per_sec": 0 00:16:04.767 }, 00:16:04.767 "claimed": false, 00:16:04.767 "zoned": false, 00:16:04.767 "supported_io_types": { 00:16:04.767 "read": true, 00:16:04.767 "write": true, 00:16:04.767 "unmap": true, 00:16:04.767 "flush": true, 00:16:04.767 "reset": true, 00:16:04.767 "nvme_admin": false, 00:16:04.767 "nvme_io": false, 00:16:04.767 "nvme_io_md": false, 00:16:04.767 "write_zeroes": true, 00:16:04.767 "zcopy": true, 00:16:04.767 "get_zone_info": false, 00:16:04.767 "zone_management": false, 00:16:04.767 "zone_append": false, 00:16:04.767 "compare": false, 00:16:04.767 "compare_and_write": false, 00:16:04.767 "abort": true, 00:16:04.767 "seek_hole": false, 00:16:04.767 "seek_data": false, 00:16:04.767 "copy": true, 00:16:04.767 "nvme_iov_md": false 00:16:04.767 }, 00:16:04.767 "memory_domains": [ 00:16:04.767 { 00:16:04.767 "dma_device_id": "system", 00:16:04.767 "dma_device_type": 1 00:16:04.767 }, 00:16:04.767 { 00:16:04.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.767 "dma_device_type": 2 00:16:04.767 } 00:16:04.767 ], 00:16:04.767 "driver_specific": {} 00:16:04.767 } 00:16:04.767 ] 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:04.767 15:23:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.767 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.767 [2024-11-20 15:23:51.205235] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:04.767 [2024-11-20 15:23:51.205294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:04.767 [2024-11-20 15:23:51.205324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:04.767 [2024-11-20 15:23:51.207555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:04.768 [2024-11-20 15:23:51.207611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:04.768 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.768 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:04.768 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.768 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:04.768 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.768 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.768 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:04.768 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.768 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.768 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.768 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.768 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.768 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.768 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.768 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.768 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.027 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.027 "name": "Existed_Raid", 00:16:05.027 "uuid": "eeb076a0-2172-4942-b357-7d273dbaed7d", 00:16:05.027 "strip_size_kb": 64, 00:16:05.027 "state": "configuring", 00:16:05.027 "raid_level": "raid5f", 00:16:05.027 "superblock": true, 00:16:05.027 "num_base_bdevs": 4, 00:16:05.027 "num_base_bdevs_discovered": 3, 00:16:05.027 "num_base_bdevs_operational": 4, 00:16:05.027 "base_bdevs_list": [ 00:16:05.027 { 00:16:05.027 "name": "BaseBdev1", 00:16:05.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.027 "is_configured": false, 00:16:05.027 "data_offset": 0, 00:16:05.027 "data_size": 0 00:16:05.027 }, 00:16:05.027 { 00:16:05.027 "name": "BaseBdev2", 00:16:05.027 "uuid": "9ead094b-5af2-447d-8365-d8ae40ca620c", 00:16:05.027 "is_configured": true, 00:16:05.027 "data_offset": 2048, 00:16:05.027 
"data_size": 63488 00:16:05.027 }, 00:16:05.027 { 00:16:05.027 "name": "BaseBdev3", 00:16:05.027 "uuid": "9c7cf1c5-e8d4-4b7d-b549-8f419e2c5d1b", 00:16:05.027 "is_configured": true, 00:16:05.027 "data_offset": 2048, 00:16:05.027 "data_size": 63488 00:16:05.027 }, 00:16:05.027 { 00:16:05.027 "name": "BaseBdev4", 00:16:05.027 "uuid": "6c4c33dd-a071-460c-ad05-d1e3238d1894", 00:16:05.027 "is_configured": true, 00:16:05.027 "data_offset": 2048, 00:16:05.027 "data_size": 63488 00:16:05.027 } 00:16:05.027 ] 00:16:05.027 }' 00:16:05.027 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.027 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.287 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:05.287 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.287 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.287 [2024-11-20 15:23:51.640626] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:05.287 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.287 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:05.287 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:05.287 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:05.287 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.287 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.287 15:23:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:05.287 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.287 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.287 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.287 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.287 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.287 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.287 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.287 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.287 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.287 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.287 "name": "Existed_Raid", 00:16:05.287 "uuid": "eeb076a0-2172-4942-b357-7d273dbaed7d", 00:16:05.287 "strip_size_kb": 64, 00:16:05.287 "state": "configuring", 00:16:05.287 "raid_level": "raid5f", 00:16:05.288 "superblock": true, 00:16:05.288 "num_base_bdevs": 4, 00:16:05.288 "num_base_bdevs_discovered": 2, 00:16:05.288 "num_base_bdevs_operational": 4, 00:16:05.288 "base_bdevs_list": [ 00:16:05.288 { 00:16:05.288 "name": "BaseBdev1", 00:16:05.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.288 "is_configured": false, 00:16:05.288 "data_offset": 0, 00:16:05.288 "data_size": 0 00:16:05.288 }, 00:16:05.288 { 00:16:05.288 "name": null, 00:16:05.288 "uuid": "9ead094b-5af2-447d-8365-d8ae40ca620c", 00:16:05.288 
"is_configured": false, 00:16:05.288 "data_offset": 0, 00:16:05.288 "data_size": 63488 00:16:05.288 }, 00:16:05.288 { 00:16:05.288 "name": "BaseBdev3", 00:16:05.288 "uuid": "9c7cf1c5-e8d4-4b7d-b549-8f419e2c5d1b", 00:16:05.288 "is_configured": true, 00:16:05.288 "data_offset": 2048, 00:16:05.288 "data_size": 63488 00:16:05.288 }, 00:16:05.288 { 00:16:05.288 "name": "BaseBdev4", 00:16:05.288 "uuid": "6c4c33dd-a071-460c-ad05-d1e3238d1894", 00:16:05.288 "is_configured": true, 00:16:05.288 "data_offset": 2048, 00:16:05.288 "data_size": 63488 00:16:05.288 } 00:16:05.288 ] 00:16:05.288 }' 00:16:05.288 15:23:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.288 15:23:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.854 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:05.854 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.854 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.854 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.854 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.854 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:05.854 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:05.854 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.854 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.854 [2024-11-20 15:23:52.125963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:16:05.854 BaseBdev1 00:16:05.854 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.854 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:05.854 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:05.854 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:05.854 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:05.854 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:05.854 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:05.854 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:05.854 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.854 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.854 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.854 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:05.854 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.854 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.854 [ 00:16:05.854 { 00:16:05.854 "name": "BaseBdev1", 00:16:05.854 "aliases": [ 00:16:05.854 "d497b072-97c0-49d7-8d3e-dc95eaf009fd" 00:16:05.854 ], 00:16:05.854 "product_name": "Malloc disk", 00:16:05.854 "block_size": 512, 00:16:05.854 "num_blocks": 65536, 00:16:05.854 "uuid": "d497b072-97c0-49d7-8d3e-dc95eaf009fd", 
00:16:05.854 "assigned_rate_limits": { 00:16:05.854 "rw_ios_per_sec": 0, 00:16:05.854 "rw_mbytes_per_sec": 0, 00:16:05.854 "r_mbytes_per_sec": 0, 00:16:05.854 "w_mbytes_per_sec": 0 00:16:05.854 }, 00:16:05.854 "claimed": true, 00:16:05.854 "claim_type": "exclusive_write", 00:16:05.854 "zoned": false, 00:16:05.854 "supported_io_types": { 00:16:05.854 "read": true, 00:16:05.854 "write": true, 00:16:05.854 "unmap": true, 00:16:05.854 "flush": true, 00:16:05.854 "reset": true, 00:16:05.854 "nvme_admin": false, 00:16:05.854 "nvme_io": false, 00:16:05.854 "nvme_io_md": false, 00:16:05.854 "write_zeroes": true, 00:16:05.854 "zcopy": true, 00:16:05.854 "get_zone_info": false, 00:16:05.854 "zone_management": false, 00:16:05.854 "zone_append": false, 00:16:05.854 "compare": false, 00:16:05.854 "compare_and_write": false, 00:16:05.854 "abort": true, 00:16:05.854 "seek_hole": false, 00:16:05.854 "seek_data": false, 00:16:05.854 "copy": true, 00:16:05.855 "nvme_iov_md": false 00:16:05.855 }, 00:16:05.855 "memory_domains": [ 00:16:05.855 { 00:16:05.855 "dma_device_id": "system", 00:16:05.855 "dma_device_type": 1 00:16:05.855 }, 00:16:05.855 { 00:16:05.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.855 "dma_device_type": 2 00:16:05.855 } 00:16:05.855 ], 00:16:05.855 "driver_specific": {} 00:16:05.855 } 00:16:05.855 ] 00:16:05.855 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.855 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:05.855 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:05.855 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:05.855 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:05.855 15:23:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.855 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.855 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:05.855 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.855 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.855 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.855 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.855 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.855 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.855 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.855 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.855 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.855 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.855 "name": "Existed_Raid", 00:16:05.855 "uuid": "eeb076a0-2172-4942-b357-7d273dbaed7d", 00:16:05.855 "strip_size_kb": 64, 00:16:05.855 "state": "configuring", 00:16:05.855 "raid_level": "raid5f", 00:16:05.855 "superblock": true, 00:16:05.855 "num_base_bdevs": 4, 00:16:05.855 "num_base_bdevs_discovered": 3, 00:16:05.855 "num_base_bdevs_operational": 4, 00:16:05.855 "base_bdevs_list": [ 00:16:05.855 { 00:16:05.855 "name": "BaseBdev1", 00:16:05.855 "uuid": "d497b072-97c0-49d7-8d3e-dc95eaf009fd", 
00:16:05.855 "is_configured": true, 00:16:05.855 "data_offset": 2048, 00:16:05.855 "data_size": 63488 00:16:05.855 }, 00:16:05.855 { 00:16:05.855 "name": null, 00:16:05.855 "uuid": "9ead094b-5af2-447d-8365-d8ae40ca620c", 00:16:05.855 "is_configured": false, 00:16:05.855 "data_offset": 0, 00:16:05.855 "data_size": 63488 00:16:05.855 }, 00:16:05.855 { 00:16:05.855 "name": "BaseBdev3", 00:16:05.855 "uuid": "9c7cf1c5-e8d4-4b7d-b549-8f419e2c5d1b", 00:16:05.855 "is_configured": true, 00:16:05.855 "data_offset": 2048, 00:16:05.855 "data_size": 63488 00:16:05.855 }, 00:16:05.855 { 00:16:05.855 "name": "BaseBdev4", 00:16:05.855 "uuid": "6c4c33dd-a071-460c-ad05-d1e3238d1894", 00:16:05.855 "is_configured": true, 00:16:05.855 "data_offset": 2048, 00:16:05.855 "data_size": 63488 00:16:05.855 } 00:16:05.855 ] 00:16:05.855 }' 00:16:05.855 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.855 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.424 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.424 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:06.424 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.424 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.424 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.424 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:06.424 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:06.424 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:06.424 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.424 [2024-11-20 15:23:52.665380] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:06.424 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.424 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:06.424 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:06.424 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:06.424 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.424 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.424 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:06.424 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.424 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.424 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.424 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.424 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.424 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.424 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.424 15:23:52 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:06.424 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.424 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.424 "name": "Existed_Raid", 00:16:06.424 "uuid": "eeb076a0-2172-4942-b357-7d273dbaed7d", 00:16:06.424 "strip_size_kb": 64, 00:16:06.424 "state": "configuring", 00:16:06.424 "raid_level": "raid5f", 00:16:06.424 "superblock": true, 00:16:06.424 "num_base_bdevs": 4, 00:16:06.424 "num_base_bdevs_discovered": 2, 00:16:06.424 "num_base_bdevs_operational": 4, 00:16:06.424 "base_bdevs_list": [ 00:16:06.424 { 00:16:06.424 "name": "BaseBdev1", 00:16:06.424 "uuid": "d497b072-97c0-49d7-8d3e-dc95eaf009fd", 00:16:06.424 "is_configured": true, 00:16:06.424 "data_offset": 2048, 00:16:06.424 "data_size": 63488 00:16:06.424 }, 00:16:06.424 { 00:16:06.424 "name": null, 00:16:06.424 "uuid": "9ead094b-5af2-447d-8365-d8ae40ca620c", 00:16:06.424 "is_configured": false, 00:16:06.424 "data_offset": 0, 00:16:06.424 "data_size": 63488 00:16:06.425 }, 00:16:06.425 { 00:16:06.425 "name": null, 00:16:06.425 "uuid": "9c7cf1c5-e8d4-4b7d-b549-8f419e2c5d1b", 00:16:06.425 "is_configured": false, 00:16:06.425 "data_offset": 0, 00:16:06.425 "data_size": 63488 00:16:06.425 }, 00:16:06.425 { 00:16:06.425 "name": "BaseBdev4", 00:16:06.425 "uuid": "6c4c33dd-a071-460c-ad05-d1e3238d1894", 00:16:06.425 "is_configured": true, 00:16:06.425 "data_offset": 2048, 00:16:06.425 "data_size": 63488 00:16:06.425 } 00:16:06.425 ] 00:16:06.425 }' 00:16:06.425 15:23:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.425 15:23:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.683 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.683 15:23:53 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.683 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:06.683 15:23:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.683 15:23:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.683 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:06.683 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:06.683 15:23:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.683 15:23:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.683 [2024-11-20 15:23:53.136707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:06.683 15:23:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.683 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:06.683 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:06.683 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:06.683 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.683 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.683 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:06.683 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:16:06.683 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.683 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.683 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.683 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.683 15:23:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.683 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.683 15:23:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.942 15:23:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.942 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.942 "name": "Existed_Raid", 00:16:06.942 "uuid": "eeb076a0-2172-4942-b357-7d273dbaed7d", 00:16:06.942 "strip_size_kb": 64, 00:16:06.942 "state": "configuring", 00:16:06.942 "raid_level": "raid5f", 00:16:06.942 "superblock": true, 00:16:06.942 "num_base_bdevs": 4, 00:16:06.942 "num_base_bdevs_discovered": 3, 00:16:06.942 "num_base_bdevs_operational": 4, 00:16:06.942 "base_bdevs_list": [ 00:16:06.942 { 00:16:06.942 "name": "BaseBdev1", 00:16:06.942 "uuid": "d497b072-97c0-49d7-8d3e-dc95eaf009fd", 00:16:06.942 "is_configured": true, 00:16:06.942 "data_offset": 2048, 00:16:06.942 "data_size": 63488 00:16:06.942 }, 00:16:06.942 { 00:16:06.942 "name": null, 00:16:06.942 "uuid": "9ead094b-5af2-447d-8365-d8ae40ca620c", 00:16:06.942 "is_configured": false, 00:16:06.942 "data_offset": 0, 00:16:06.942 "data_size": 63488 00:16:06.942 }, 00:16:06.942 { 00:16:06.942 "name": "BaseBdev3", 00:16:06.942 "uuid": "9c7cf1c5-e8d4-4b7d-b549-8f419e2c5d1b", 
00:16:06.942 "is_configured": true, 00:16:06.942 "data_offset": 2048, 00:16:06.942 "data_size": 63488 00:16:06.942 }, 00:16:06.942 { 00:16:06.942 "name": "BaseBdev4", 00:16:06.942 "uuid": "6c4c33dd-a071-460c-ad05-d1e3238d1894", 00:16:06.942 "is_configured": true, 00:16:06.942 "data_offset": 2048, 00:16:06.942 "data_size": 63488 00:16:06.942 } 00:16:06.942 ] 00:16:06.942 }' 00:16:06.942 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.942 15:23:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.202 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.202 15:23:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.202 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:07.202 15:23:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.202 15:23:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.202 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:07.202 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:07.202 15:23:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.202 15:23:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.202 [2024-11-20 15:23:53.592107] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:07.461 15:23:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.461 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:16:07.461 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:07.461 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:07.461 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.461 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.461 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:07.461 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.461 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.461 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.461 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.461 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.461 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.461 15:23:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.461 15:23:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.461 15:23:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.461 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.461 "name": "Existed_Raid", 00:16:07.461 "uuid": "eeb076a0-2172-4942-b357-7d273dbaed7d", 00:16:07.461 "strip_size_kb": 64, 00:16:07.461 "state": "configuring", 00:16:07.461 "raid_level": "raid5f", 
00:16:07.461 "superblock": true, 00:16:07.461 "num_base_bdevs": 4, 00:16:07.461 "num_base_bdevs_discovered": 2, 00:16:07.461 "num_base_bdevs_operational": 4, 00:16:07.461 "base_bdevs_list": [ 00:16:07.461 { 00:16:07.461 "name": null, 00:16:07.461 "uuid": "d497b072-97c0-49d7-8d3e-dc95eaf009fd", 00:16:07.461 "is_configured": false, 00:16:07.461 "data_offset": 0, 00:16:07.461 "data_size": 63488 00:16:07.461 }, 00:16:07.461 { 00:16:07.461 "name": null, 00:16:07.461 "uuid": "9ead094b-5af2-447d-8365-d8ae40ca620c", 00:16:07.461 "is_configured": false, 00:16:07.461 "data_offset": 0, 00:16:07.461 "data_size": 63488 00:16:07.461 }, 00:16:07.461 { 00:16:07.461 "name": "BaseBdev3", 00:16:07.461 "uuid": "9c7cf1c5-e8d4-4b7d-b549-8f419e2c5d1b", 00:16:07.461 "is_configured": true, 00:16:07.461 "data_offset": 2048, 00:16:07.461 "data_size": 63488 00:16:07.461 }, 00:16:07.461 { 00:16:07.461 "name": "BaseBdev4", 00:16:07.461 "uuid": "6c4c33dd-a071-460c-ad05-d1e3238d1894", 00:16:07.461 "is_configured": true, 00:16:07.461 "data_offset": 2048, 00:16:07.461 "data_size": 63488 00:16:07.461 } 00:16:07.461 ] 00:16:07.461 }' 00:16:07.461 15:23:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.461 15:23:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.721 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.721 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:07.721 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.721 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.721 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.721 15:23:54 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:07.721 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:07.721 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.721 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.721 [2024-11-20 15:23:54.138153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:07.721 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.721 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:07.721 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:07.721 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:07.721 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.721 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.721 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:07.721 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.721 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.721 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.721 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.721 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:16:07.721 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.721 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.721 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.721 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.721 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.721 "name": "Existed_Raid", 00:16:07.721 "uuid": "eeb076a0-2172-4942-b357-7d273dbaed7d", 00:16:07.721 "strip_size_kb": 64, 00:16:07.721 "state": "configuring", 00:16:07.721 "raid_level": "raid5f", 00:16:07.721 "superblock": true, 00:16:07.721 "num_base_bdevs": 4, 00:16:07.721 "num_base_bdevs_discovered": 3, 00:16:07.721 "num_base_bdevs_operational": 4, 00:16:07.721 "base_bdevs_list": [ 00:16:07.721 { 00:16:07.721 "name": null, 00:16:07.721 "uuid": "d497b072-97c0-49d7-8d3e-dc95eaf009fd", 00:16:07.721 "is_configured": false, 00:16:07.721 "data_offset": 0, 00:16:07.721 "data_size": 63488 00:16:07.721 }, 00:16:07.721 { 00:16:07.721 "name": "BaseBdev2", 00:16:07.721 "uuid": "9ead094b-5af2-447d-8365-d8ae40ca620c", 00:16:07.721 "is_configured": true, 00:16:07.721 "data_offset": 2048, 00:16:07.721 "data_size": 63488 00:16:07.721 }, 00:16:07.721 { 00:16:07.721 "name": "BaseBdev3", 00:16:07.721 "uuid": "9c7cf1c5-e8d4-4b7d-b549-8f419e2c5d1b", 00:16:07.721 "is_configured": true, 00:16:07.721 "data_offset": 2048, 00:16:07.721 "data_size": 63488 00:16:07.721 }, 00:16:07.721 { 00:16:07.721 "name": "BaseBdev4", 00:16:07.721 "uuid": "6c4c33dd-a071-460c-ad05-d1e3238d1894", 00:16:07.721 "is_configured": true, 00:16:07.721 "data_offset": 2048, 00:16:07.721 "data_size": 63488 00:16:07.721 } 00:16:07.721 ] 00:16:07.721 }' 00:16:07.721 15:23:54 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.721 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.292 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:08.292 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.292 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.292 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.292 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.292 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:08.292 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:08.292 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.292 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.292 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.292 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.292 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d497b072-97c0-49d7-8d3e-dc95eaf009fd 00:16:08.292 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.292 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.292 [2024-11-20 15:23:54.739478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:08.292 [2024-11-20 
15:23:54.739852] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:08.292 [2024-11-20 15:23:54.739872] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:08.292 NewBaseBdev 00:16:08.292 [2024-11-20 15:23:54.740198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:08.292 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.292 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:08.292 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:08.292 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:08.292 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:08.292 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:08.292 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:08.292 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:08.292 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.292 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.292 [2024-11-20 15:23:54.747187] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:08.292 [2024-11-20 15:23:54.749349] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:08.292 [2024-11-20 15:23:54.750411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.292 15:23:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.292 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:08.292 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.292 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.552 [ 00:16:08.552 { 00:16:08.552 "name": "NewBaseBdev", 00:16:08.552 "aliases": [ 00:16:08.552 "d497b072-97c0-49d7-8d3e-dc95eaf009fd" 00:16:08.552 ], 00:16:08.552 "product_name": "Malloc disk", 00:16:08.552 "block_size": 512, 00:16:08.552 "num_blocks": 65536, 00:16:08.552 "uuid": "d497b072-97c0-49d7-8d3e-dc95eaf009fd", 00:16:08.552 "assigned_rate_limits": { 00:16:08.552 "rw_ios_per_sec": 0, 00:16:08.552 "rw_mbytes_per_sec": 0, 00:16:08.552 "r_mbytes_per_sec": 0, 00:16:08.552 "w_mbytes_per_sec": 0 00:16:08.552 }, 00:16:08.552 "claimed": true, 00:16:08.552 "claim_type": "exclusive_write", 00:16:08.552 "zoned": false, 00:16:08.552 "supported_io_types": { 00:16:08.552 "read": true, 00:16:08.552 "write": true, 00:16:08.552 "unmap": true, 00:16:08.552 "flush": true, 00:16:08.552 "reset": true, 00:16:08.552 "nvme_admin": false, 00:16:08.552 "nvme_io": false, 00:16:08.552 "nvme_io_md": false, 00:16:08.552 "write_zeroes": true, 00:16:08.552 "zcopy": true, 00:16:08.552 "get_zone_info": false, 00:16:08.552 "zone_management": false, 00:16:08.552 "zone_append": false, 00:16:08.552 "compare": false, 00:16:08.552 "compare_and_write": false, 00:16:08.552 "abort": true, 00:16:08.552 "seek_hole": false, 00:16:08.552 "seek_data": false, 00:16:08.552 "copy": true, 00:16:08.552 "nvme_iov_md": false 00:16:08.552 }, 00:16:08.552 "memory_domains": [ 00:16:08.552 { 00:16:08.552 "dma_device_id": "system", 00:16:08.552 "dma_device_type": 1 00:16:08.552 }, 00:16:08.552 { 00:16:08.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:16:08.552 "dma_device_type": 2 00:16:08.552 } 00:16:08.552 ], 00:16:08.552 "driver_specific": {} 00:16:08.552 } 00:16:08.552 ] 00:16:08.552 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.552 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:08.552 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:08.552 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:08.552 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.552 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.552 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.552 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:08.552 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.552 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.552 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.552 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.552 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.552 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.552 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.552 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:16:08.552 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.552 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.552 "name": "Existed_Raid", 00:16:08.552 "uuid": "eeb076a0-2172-4942-b357-7d273dbaed7d", 00:16:08.552 "strip_size_kb": 64, 00:16:08.552 "state": "online", 00:16:08.553 "raid_level": "raid5f", 00:16:08.553 "superblock": true, 00:16:08.553 "num_base_bdevs": 4, 00:16:08.553 "num_base_bdevs_discovered": 4, 00:16:08.553 "num_base_bdevs_operational": 4, 00:16:08.553 "base_bdevs_list": [ 00:16:08.553 { 00:16:08.553 "name": "NewBaseBdev", 00:16:08.553 "uuid": "d497b072-97c0-49d7-8d3e-dc95eaf009fd", 00:16:08.553 "is_configured": true, 00:16:08.553 "data_offset": 2048, 00:16:08.553 "data_size": 63488 00:16:08.553 }, 00:16:08.553 { 00:16:08.553 "name": "BaseBdev2", 00:16:08.553 "uuid": "9ead094b-5af2-447d-8365-d8ae40ca620c", 00:16:08.553 "is_configured": true, 00:16:08.553 "data_offset": 2048, 00:16:08.553 "data_size": 63488 00:16:08.553 }, 00:16:08.553 { 00:16:08.553 "name": "BaseBdev3", 00:16:08.553 "uuid": "9c7cf1c5-e8d4-4b7d-b549-8f419e2c5d1b", 00:16:08.553 "is_configured": true, 00:16:08.553 "data_offset": 2048, 00:16:08.553 "data_size": 63488 00:16:08.553 }, 00:16:08.553 { 00:16:08.553 "name": "BaseBdev4", 00:16:08.553 "uuid": "6c4c33dd-a071-460c-ad05-d1e3238d1894", 00:16:08.553 "is_configured": true, 00:16:08.553 "data_offset": 2048, 00:16:08.553 "data_size": 63488 00:16:08.553 } 00:16:08.553 ] 00:16:08.553 }' 00:16:08.553 15:23:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.553 15:23:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.812 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:08.812 15:23:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:08.812 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:08.812 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:08.812 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:08.812 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:08.812 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:08.812 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:08.812 15:23:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.812 15:23:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.812 [2024-11-20 15:23:55.182539] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:08.812 15:23:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.812 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:08.812 "name": "Existed_Raid", 00:16:08.812 "aliases": [ 00:16:08.812 "eeb076a0-2172-4942-b357-7d273dbaed7d" 00:16:08.812 ], 00:16:08.812 "product_name": "Raid Volume", 00:16:08.812 "block_size": 512, 00:16:08.812 "num_blocks": 190464, 00:16:08.812 "uuid": "eeb076a0-2172-4942-b357-7d273dbaed7d", 00:16:08.812 "assigned_rate_limits": { 00:16:08.813 "rw_ios_per_sec": 0, 00:16:08.813 "rw_mbytes_per_sec": 0, 00:16:08.813 "r_mbytes_per_sec": 0, 00:16:08.813 "w_mbytes_per_sec": 0 00:16:08.813 }, 00:16:08.813 "claimed": false, 00:16:08.813 "zoned": false, 00:16:08.813 "supported_io_types": { 00:16:08.813 "read": true, 00:16:08.813 
"write": true, 00:16:08.813 "unmap": false, 00:16:08.813 "flush": false, 00:16:08.813 "reset": true, 00:16:08.813 "nvme_admin": false, 00:16:08.813 "nvme_io": false, 00:16:08.813 "nvme_io_md": false, 00:16:08.813 "write_zeroes": true, 00:16:08.813 "zcopy": false, 00:16:08.813 "get_zone_info": false, 00:16:08.813 "zone_management": false, 00:16:08.813 "zone_append": false, 00:16:08.813 "compare": false, 00:16:08.813 "compare_and_write": false, 00:16:08.813 "abort": false, 00:16:08.813 "seek_hole": false, 00:16:08.813 "seek_data": false, 00:16:08.813 "copy": false, 00:16:08.813 "nvme_iov_md": false 00:16:08.813 }, 00:16:08.813 "driver_specific": { 00:16:08.813 "raid": { 00:16:08.813 "uuid": "eeb076a0-2172-4942-b357-7d273dbaed7d", 00:16:08.813 "strip_size_kb": 64, 00:16:08.813 "state": "online", 00:16:08.813 "raid_level": "raid5f", 00:16:08.813 "superblock": true, 00:16:08.813 "num_base_bdevs": 4, 00:16:08.813 "num_base_bdevs_discovered": 4, 00:16:08.813 "num_base_bdevs_operational": 4, 00:16:08.813 "base_bdevs_list": [ 00:16:08.813 { 00:16:08.813 "name": "NewBaseBdev", 00:16:08.813 "uuid": "d497b072-97c0-49d7-8d3e-dc95eaf009fd", 00:16:08.813 "is_configured": true, 00:16:08.813 "data_offset": 2048, 00:16:08.813 "data_size": 63488 00:16:08.813 }, 00:16:08.813 { 00:16:08.813 "name": "BaseBdev2", 00:16:08.813 "uuid": "9ead094b-5af2-447d-8365-d8ae40ca620c", 00:16:08.813 "is_configured": true, 00:16:08.813 "data_offset": 2048, 00:16:08.813 "data_size": 63488 00:16:08.813 }, 00:16:08.813 { 00:16:08.813 "name": "BaseBdev3", 00:16:08.813 "uuid": "9c7cf1c5-e8d4-4b7d-b549-8f419e2c5d1b", 00:16:08.813 "is_configured": true, 00:16:08.813 "data_offset": 2048, 00:16:08.813 "data_size": 63488 00:16:08.813 }, 00:16:08.813 { 00:16:08.813 "name": "BaseBdev4", 00:16:08.813 "uuid": "6c4c33dd-a071-460c-ad05-d1e3238d1894", 00:16:08.813 "is_configured": true, 00:16:08.813 "data_offset": 2048, 00:16:08.813 "data_size": 63488 00:16:08.813 } 00:16:08.813 ] 00:16:08.813 } 00:16:08.813 } 
00:16:08.813 }' 00:16:08.813 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:08.813 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:08.813 BaseBdev2 00:16:08.813 BaseBdev3 00:16:08.813 BaseBdev4' 00:16:08.813 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.072 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:09.072 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:09.072 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.072 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:09.072 15:23:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.072 15:23:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.072 15:23:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.072 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:09.072 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:09.072 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:09.072 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:09.072 15:23:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:09.072 15:23:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.072 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.072 15:23:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.072 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:09.072 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:09.072 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:09.072 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:09.073 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.073 15:23:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.073 15:23:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.073 15:23:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.073 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:09.073 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:09.073 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:09.073 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:09.073 15:23:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.073 
15:23:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.073 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.073 15:23:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.073 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:09.073 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:09.073 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:09.073 15:23:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.073 15:23:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.073 [2024-11-20 15:23:55.481893] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:09.073 [2024-11-20 15:23:55.481929] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:09.073 [2024-11-20 15:23:55.482015] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:09.073 [2024-11-20 15:23:55.482313] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:09.073 [2024-11-20 15:23:55.482327] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:09.073 15:23:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.073 15:23:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83256 00:16:09.073 15:23:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83256 ']' 00:16:09.073 15:23:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83256 00:16:09.073 15:23:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:09.073 15:23:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:09.073 15:23:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83256 00:16:09.073 killing process with pid 83256 00:16:09.073 15:23:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:09.073 15:23:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:09.073 15:23:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83256' 00:16:09.073 15:23:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83256 00:16:09.073 15:23:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83256 00:16:09.073 [2024-11-20 15:23:55.530205] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:09.642 [2024-11-20 15:23:55.931828] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:11.021 15:23:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:11.021 00:16:11.021 real 0m11.294s 00:16:11.021 user 0m17.771s 00:16:11.021 sys 0m2.343s 00:16:11.021 15:23:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:11.021 15:23:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.021 ************************************ 00:16:11.021 END TEST raid5f_state_function_test_sb 00:16:11.021 ************************************ 00:16:11.021 15:23:57 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 4 00:16:11.021 15:23:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:11.021 15:23:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:11.021 15:23:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:11.021 ************************************ 00:16:11.021 START TEST raid5f_superblock_test 00:16:11.021 ************************************ 00:16:11.021 15:23:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:16:11.021 15:23:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:11.021 15:23:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:11.021 15:23:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:11.021 15:23:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:11.021 15:23:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:11.021 15:23:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:11.021 15:23:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:11.021 15:23:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:11.021 15:23:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:11.021 15:23:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:11.021 15:23:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:11.021 15:23:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:11.021 15:23:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:11.021 15:23:57 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:11.021 15:23:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:11.021 15:23:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:11.021 15:23:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:11.021 15:23:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83922 00:16:11.021 15:23:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83922 00:16:11.021 15:23:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 83922 ']' 00:16:11.021 15:23:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.021 15:23:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:11.021 15:23:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.021 15:23:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:11.021 15:23:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.021 [2024-11-20 15:23:57.236311] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:16:11.021 [2024-11-20 15:23:57.236688] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83922 ] 00:16:11.021 [2024-11-20 15:23:57.419089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.280 [2024-11-20 15:23:57.541259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.280 [2024-11-20 15:23:57.747739] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:11.280 [2024-11-20 15:23:57.747963] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:11.848 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:11.848 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:11.848 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:11.848 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:11.848 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:11.848 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:11.848 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:11.848 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:11.848 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:11.848 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:11.848 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:11.848 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.848 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.848 malloc1 00:16:11.848 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.848 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:11.848 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.848 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.848 [2024-11-20 15:23:58.127866] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:11.849 [2024-11-20 15:23:58.127936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.849 [2024-11-20 15:23:58.127962] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:11.849 [2024-11-20 15:23:58.127974] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.849 [2024-11-20 15:23:58.130472] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.849 [2024-11-20 15:23:58.130634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:11.849 pt1 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.849 malloc2 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.849 [2024-11-20 15:23:58.183339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:11.849 [2024-11-20 15:23:58.183421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.849 [2024-11-20 15:23:58.183455] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:11.849 [2024-11-20 15:23:58.183466] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.849 [2024-11-20 15:23:58.186010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.849 [2024-11-20 15:23:58.186057] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:11.849 pt2 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.849 malloc3 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.849 [2024-11-20 15:23:58.253504] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:11.849 [2024-11-20 15:23:58.253581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.849 [2024-11-20 15:23:58.253608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:11.849 [2024-11-20 15:23:58.253620] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.849 [2024-11-20 15:23:58.256162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.849 [2024-11-20 15:23:58.256210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:11.849 pt3 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.849 15:23:58 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.849 malloc4 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.849 [2024-11-20 15:23:58.309232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:11.849 [2024-11-20 15:23:58.309462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.849 [2024-11-20 15:23:58.309496] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:11.849 [2024-11-20 15:23:58.309508] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.849 [2024-11-20 15:23:58.311985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.849 [2024-11-20 15:23:58.312031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:11.849 pt4 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.849 15:23:58 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:11.849 [2024-11-20 15:23:58.321262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:11.849 [2024-11-20 15:23:58.323479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:11.849 [2024-11-20 15:23:58.323774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:11.849 [2024-11-20 15:23:58.323831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:11.849 [2024-11-20 15:23:58.324045] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:11.849 [2024-11-20 15:23:58.324064] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:11.849 [2024-11-20 15:23:58.324374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:12.109 [2024-11-20 15:23:58.332749] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:12.109 [2024-11-20 15:23:58.332799] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:12.109 [2024-11-20 15:23:58.333045] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.109 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.109 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:12.109 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.109 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.109 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.109 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.109 
15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:12.109 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.109 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.109 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.109 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.109 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.109 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.109 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.109 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.109 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.109 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.109 "name": "raid_bdev1", 00:16:12.109 "uuid": "e6a99462-d922-46b3-a0d2-44f2e8dca22f", 00:16:12.109 "strip_size_kb": 64, 00:16:12.109 "state": "online", 00:16:12.109 "raid_level": "raid5f", 00:16:12.109 "superblock": true, 00:16:12.109 "num_base_bdevs": 4, 00:16:12.109 "num_base_bdevs_discovered": 4, 00:16:12.109 "num_base_bdevs_operational": 4, 00:16:12.109 "base_bdevs_list": [ 00:16:12.109 { 00:16:12.109 "name": "pt1", 00:16:12.109 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:12.109 "is_configured": true, 00:16:12.109 "data_offset": 2048, 00:16:12.109 "data_size": 63488 00:16:12.109 }, 00:16:12.109 { 00:16:12.109 "name": "pt2", 00:16:12.109 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:12.109 "is_configured": true, 00:16:12.109 "data_offset": 2048, 00:16:12.109 
"data_size": 63488 00:16:12.109 }, 00:16:12.109 { 00:16:12.109 "name": "pt3", 00:16:12.109 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:12.109 "is_configured": true, 00:16:12.109 "data_offset": 2048, 00:16:12.109 "data_size": 63488 00:16:12.109 }, 00:16:12.109 { 00:16:12.109 "name": "pt4", 00:16:12.109 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:12.109 "is_configured": true, 00:16:12.109 "data_offset": 2048, 00:16:12.109 "data_size": 63488 00:16:12.109 } 00:16:12.109 ] 00:16:12.109 }' 00:16:12.109 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.109 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.369 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:12.369 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:12.369 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:12.369 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:12.369 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:12.369 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:12.369 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:12.369 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:12.369 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.369 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.369 [2024-11-20 15:23:58.781837] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:12.369 15:23:58 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.369 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:12.369 "name": "raid_bdev1", 00:16:12.369 "aliases": [ 00:16:12.369 "e6a99462-d922-46b3-a0d2-44f2e8dca22f" 00:16:12.369 ], 00:16:12.369 "product_name": "Raid Volume", 00:16:12.369 "block_size": 512, 00:16:12.369 "num_blocks": 190464, 00:16:12.369 "uuid": "e6a99462-d922-46b3-a0d2-44f2e8dca22f", 00:16:12.369 "assigned_rate_limits": { 00:16:12.369 "rw_ios_per_sec": 0, 00:16:12.369 "rw_mbytes_per_sec": 0, 00:16:12.369 "r_mbytes_per_sec": 0, 00:16:12.369 "w_mbytes_per_sec": 0 00:16:12.369 }, 00:16:12.369 "claimed": false, 00:16:12.369 "zoned": false, 00:16:12.369 "supported_io_types": { 00:16:12.369 "read": true, 00:16:12.369 "write": true, 00:16:12.369 "unmap": false, 00:16:12.369 "flush": false, 00:16:12.369 "reset": true, 00:16:12.369 "nvme_admin": false, 00:16:12.369 "nvme_io": false, 00:16:12.369 "nvme_io_md": false, 00:16:12.369 "write_zeroes": true, 00:16:12.369 "zcopy": false, 00:16:12.369 "get_zone_info": false, 00:16:12.369 "zone_management": false, 00:16:12.369 "zone_append": false, 00:16:12.369 "compare": false, 00:16:12.369 "compare_and_write": false, 00:16:12.369 "abort": false, 00:16:12.369 "seek_hole": false, 00:16:12.369 "seek_data": false, 00:16:12.369 "copy": false, 00:16:12.369 "nvme_iov_md": false 00:16:12.369 }, 00:16:12.369 "driver_specific": { 00:16:12.369 "raid": { 00:16:12.369 "uuid": "e6a99462-d922-46b3-a0d2-44f2e8dca22f", 00:16:12.369 "strip_size_kb": 64, 00:16:12.369 "state": "online", 00:16:12.369 "raid_level": "raid5f", 00:16:12.369 "superblock": true, 00:16:12.369 "num_base_bdevs": 4, 00:16:12.369 "num_base_bdevs_discovered": 4, 00:16:12.369 "num_base_bdevs_operational": 4, 00:16:12.369 "base_bdevs_list": [ 00:16:12.369 { 00:16:12.369 "name": "pt1", 00:16:12.369 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:12.369 "is_configured": true, 00:16:12.369 "data_offset": 2048, 
00:16:12.369 "data_size": 63488 00:16:12.369 }, 00:16:12.369 { 00:16:12.369 "name": "pt2", 00:16:12.369 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:12.369 "is_configured": true, 00:16:12.369 "data_offset": 2048, 00:16:12.369 "data_size": 63488 00:16:12.369 }, 00:16:12.369 { 00:16:12.369 "name": "pt3", 00:16:12.369 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:12.369 "is_configured": true, 00:16:12.369 "data_offset": 2048, 00:16:12.369 "data_size": 63488 00:16:12.369 }, 00:16:12.369 { 00:16:12.369 "name": "pt4", 00:16:12.369 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:12.369 "is_configured": true, 00:16:12.369 "data_offset": 2048, 00:16:12.369 "data_size": 63488 00:16:12.369 } 00:16:12.369 ] 00:16:12.369 } 00:16:12.369 } 00:16:12.369 }' 00:16:12.369 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:12.629 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:12.629 pt2 00:16:12.629 pt3 00:16:12.629 pt4' 00:16:12.629 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.629 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:12.629 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:12.629 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.629 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:12.629 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.629 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.629 15:23:58 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.629 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:12.629 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:12.629 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:12.629 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:12.629 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.629 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.629 15:23:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.629 15:23:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.629 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:12.629 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:12.629 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:12.629 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.629 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:12.629 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.629 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.629 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.629 15:23:59 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:12.629 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:12.629 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:12.629 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.629 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:12.629 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.629 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.629 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.629 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:12.629 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:12.629 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:12.629 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:12.629 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.629 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.889 [2024-11-20 15:23:59.109299] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e6a99462-d922-46b3-a0d2-44f2e8dca22f 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
e6a99462-d922-46b3-a0d2-44f2e8dca22f ']' 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.889 [2024-11-20 15:23:59.157076] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:12.889 [2024-11-20 15:23:59.157267] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:12.889 [2024-11-20 15:23:59.157525] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:12.889 [2024-11-20 15:23:59.157701] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:12.889 [2024-11-20 15:23:59.157830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:12.889 
15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.889 15:23:59 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:12.889 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.889 [2024-11-20 15:23:59.300890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:12.889 [2024-11-20 15:23:59.303042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:12.889 [2024-11-20 15:23:59.303248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:12.889 [2024-11-20 15:23:59.303300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:12.889 [2024-11-20 15:23:59.303360] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:12.889 [2024-11-20 15:23:59.303424] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:12.889 [2024-11-20 15:23:59.303468] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:12.889 [2024-11-20 15:23:59.303494] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:12.889 [2024-11-20 15:23:59.303513] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:12.889 [2024-11-20 15:23:59.303528] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:12.889 request: 00:16:12.889 { 00:16:12.889 "name": "raid_bdev1", 00:16:12.889 "raid_level": "raid5f", 00:16:12.889 "base_bdevs": [ 00:16:12.889 "malloc1", 00:16:12.889 "malloc2", 00:16:12.889 "malloc3", 00:16:12.889 "malloc4" 00:16:12.889 ], 00:16:12.889 "strip_size_kb": 64, 00:16:12.890 "superblock": false, 00:16:12.890 "method": "bdev_raid_create", 00:16:12.890 "req_id": 1 00:16:12.890 } 00:16:12.890 Got JSON-RPC error response 
00:16:12.890 response: 00:16:12.890 { 00:16:12.890 "code": -17, 00:16:12.890 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:12.890 } 00:16:12.890 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:12.890 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:12.890 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:12.890 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:12.890 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:12.890 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.890 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.890 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.890 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:12.890 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.890 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:12.890 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:12.890 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:12.890 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.890 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.149 [2024-11-20 15:23:59.368821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:13.149 [2024-11-20 15:23:59.368903] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:13.149 [2024-11-20 15:23:59.368925] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:13.149 [2024-11-20 15:23:59.368939] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.149 [2024-11-20 15:23:59.371479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.149 [2024-11-20 15:23:59.371533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:13.149 [2024-11-20 15:23:59.371625] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:13.149 [2024-11-20 15:23:59.371697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:13.149 pt1 00:16:13.149 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.149 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:13.149 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.149 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:13.149 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:13.149 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.149 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:13.149 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.149 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.149 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.149 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:13.149 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.149 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.149 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.149 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.150 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.150 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.150 "name": "raid_bdev1", 00:16:13.150 "uuid": "e6a99462-d922-46b3-a0d2-44f2e8dca22f", 00:16:13.150 "strip_size_kb": 64, 00:16:13.150 "state": "configuring", 00:16:13.150 "raid_level": "raid5f", 00:16:13.150 "superblock": true, 00:16:13.150 "num_base_bdevs": 4, 00:16:13.150 "num_base_bdevs_discovered": 1, 00:16:13.150 "num_base_bdevs_operational": 4, 00:16:13.150 "base_bdevs_list": [ 00:16:13.150 { 00:16:13.150 "name": "pt1", 00:16:13.150 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:13.150 "is_configured": true, 00:16:13.150 "data_offset": 2048, 00:16:13.150 "data_size": 63488 00:16:13.150 }, 00:16:13.150 { 00:16:13.150 "name": null, 00:16:13.150 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:13.150 "is_configured": false, 00:16:13.150 "data_offset": 2048, 00:16:13.150 "data_size": 63488 00:16:13.150 }, 00:16:13.150 { 00:16:13.150 "name": null, 00:16:13.150 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:13.150 "is_configured": false, 00:16:13.150 "data_offset": 2048, 00:16:13.150 "data_size": 63488 00:16:13.150 }, 00:16:13.150 { 00:16:13.150 "name": null, 00:16:13.150 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:13.150 "is_configured": false, 00:16:13.150 "data_offset": 2048, 00:16:13.150 "data_size": 63488 00:16:13.150 } 00:16:13.150 ] 00:16:13.150 }' 
00:16:13.150 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.150 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.409 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:13.409 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:13.409 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.409 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.409 [2024-11-20 15:23:59.772734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:13.409 [2024-11-20 15:23:59.772994] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.409 [2024-11-20 15:23:59.773072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:13.409 [2024-11-20 15:23:59.773196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.409 [2024-11-20 15:23:59.773720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.409 [2024-11-20 15:23:59.773868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:13.409 [2024-11-20 15:23:59.773978] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:13.409 [2024-11-20 15:23:59.774008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:13.409 pt2 00:16:13.409 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.409 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:13.409 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:13.409 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.409 [2024-11-20 15:23:59.780745] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:13.409 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.409 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:13.409 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.409 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:13.409 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:13.409 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.409 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:13.409 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.409 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.409 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.409 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.409 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.409 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.409 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.409 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.409 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:13.409 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.409 "name": "raid_bdev1", 00:16:13.409 "uuid": "e6a99462-d922-46b3-a0d2-44f2e8dca22f", 00:16:13.409 "strip_size_kb": 64, 00:16:13.409 "state": "configuring", 00:16:13.409 "raid_level": "raid5f", 00:16:13.409 "superblock": true, 00:16:13.409 "num_base_bdevs": 4, 00:16:13.409 "num_base_bdevs_discovered": 1, 00:16:13.409 "num_base_bdevs_operational": 4, 00:16:13.409 "base_bdevs_list": [ 00:16:13.409 { 00:16:13.409 "name": "pt1", 00:16:13.409 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:13.409 "is_configured": true, 00:16:13.409 "data_offset": 2048, 00:16:13.409 "data_size": 63488 00:16:13.409 }, 00:16:13.409 { 00:16:13.409 "name": null, 00:16:13.409 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:13.409 "is_configured": false, 00:16:13.409 "data_offset": 0, 00:16:13.409 "data_size": 63488 00:16:13.409 }, 00:16:13.409 { 00:16:13.409 "name": null, 00:16:13.409 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:13.409 "is_configured": false, 00:16:13.409 "data_offset": 2048, 00:16:13.409 "data_size": 63488 00:16:13.409 }, 00:16:13.409 { 00:16:13.409 "name": null, 00:16:13.409 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:13.409 "is_configured": false, 00:16:13.410 "data_offset": 2048, 00:16:13.410 "data_size": 63488 00:16:13.410 } 00:16:13.410 ] 00:16:13.410 }' 00:16:13.410 15:23:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.410 15:23:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.979 [2024-11-20 15:24:00.212112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:13.979 [2024-11-20 15:24:00.212369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.979 [2024-11-20 15:24:00.212419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:13.979 [2024-11-20 15:24:00.212432] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.979 [2024-11-20 15:24:00.212943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.979 [2024-11-20 15:24:00.212965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:13.979 [2024-11-20 15:24:00.213058] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:13.979 [2024-11-20 15:24:00.213080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:13.979 pt2 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.979 [2024-11-20 15:24:00.224087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:16:13.979 [2024-11-20 15:24:00.224162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.979 [2024-11-20 15:24:00.224192] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:13.979 [2024-11-20 15:24:00.224206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.979 [2024-11-20 15:24:00.224695] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.979 [2024-11-20 15:24:00.224716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:13.979 [2024-11-20 15:24:00.224802] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:13.979 [2024-11-20 15:24:00.224831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:13.979 pt3 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.979 [2024-11-20 15:24:00.236043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:13.979 [2024-11-20 15:24:00.236115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.979 [2024-11-20 15:24:00.236138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:13.979 [2024-11-20 15:24:00.236149] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.979 [2024-11-20 15:24:00.236650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.979 [2024-11-20 15:24:00.236693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:13.979 [2024-11-20 15:24:00.236786] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:13.979 [2024-11-20 15:24:00.236815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:13.979 [2024-11-20 15:24:00.236977] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:13.979 [2024-11-20 15:24:00.236987] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:13.979 [2024-11-20 15:24:00.237254] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:13.979 [2024-11-20 15:24:00.244910] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:13.979 [2024-11-20 15:24:00.244953] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:13.979 [2024-11-20 15:24:00.245188] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.979 pt4 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.979 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.979 "name": "raid_bdev1", 00:16:13.979 "uuid": "e6a99462-d922-46b3-a0d2-44f2e8dca22f", 00:16:13.979 "strip_size_kb": 64, 00:16:13.979 "state": "online", 00:16:13.979 "raid_level": "raid5f", 00:16:13.979 "superblock": true, 00:16:13.979 "num_base_bdevs": 4, 00:16:13.979 "num_base_bdevs_discovered": 4, 00:16:13.979 "num_base_bdevs_operational": 4, 00:16:13.979 "base_bdevs_list": [ 00:16:13.979 { 00:16:13.979 "name": "pt1", 00:16:13.979 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:13.979 "is_configured": true, 00:16:13.979 
"data_offset": 2048, 00:16:13.979 "data_size": 63488 00:16:13.979 }, 00:16:13.979 { 00:16:13.979 "name": "pt2", 00:16:13.979 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:13.979 "is_configured": true, 00:16:13.979 "data_offset": 2048, 00:16:13.979 "data_size": 63488 00:16:13.979 }, 00:16:13.979 { 00:16:13.979 "name": "pt3", 00:16:13.979 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:13.979 "is_configured": true, 00:16:13.979 "data_offset": 2048, 00:16:13.979 "data_size": 63488 00:16:13.979 }, 00:16:13.979 { 00:16:13.979 "name": "pt4", 00:16:13.980 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:13.980 "is_configured": true, 00:16:13.980 "data_offset": 2048, 00:16:13.980 "data_size": 63488 00:16:13.980 } 00:16:13.980 ] 00:16:13.980 }' 00:16:13.980 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.980 15:24:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.251 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:14.251 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:14.251 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:14.251 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:14.251 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:14.251 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:14.251 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:14.251 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:14.251 15:24:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.251 15:24:00 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.251 [2024-11-20 15:24:00.689472] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:14.512 "name": "raid_bdev1", 00:16:14.512 "aliases": [ 00:16:14.512 "e6a99462-d922-46b3-a0d2-44f2e8dca22f" 00:16:14.512 ], 00:16:14.512 "product_name": "Raid Volume", 00:16:14.512 "block_size": 512, 00:16:14.512 "num_blocks": 190464, 00:16:14.512 "uuid": "e6a99462-d922-46b3-a0d2-44f2e8dca22f", 00:16:14.512 "assigned_rate_limits": { 00:16:14.512 "rw_ios_per_sec": 0, 00:16:14.512 "rw_mbytes_per_sec": 0, 00:16:14.512 "r_mbytes_per_sec": 0, 00:16:14.512 "w_mbytes_per_sec": 0 00:16:14.512 }, 00:16:14.512 "claimed": false, 00:16:14.512 "zoned": false, 00:16:14.512 "supported_io_types": { 00:16:14.512 "read": true, 00:16:14.512 "write": true, 00:16:14.512 "unmap": false, 00:16:14.512 "flush": false, 00:16:14.512 "reset": true, 00:16:14.512 "nvme_admin": false, 00:16:14.512 "nvme_io": false, 00:16:14.512 "nvme_io_md": false, 00:16:14.512 "write_zeroes": true, 00:16:14.512 "zcopy": false, 00:16:14.512 "get_zone_info": false, 00:16:14.512 "zone_management": false, 00:16:14.512 "zone_append": false, 00:16:14.512 "compare": false, 00:16:14.512 "compare_and_write": false, 00:16:14.512 "abort": false, 00:16:14.512 "seek_hole": false, 00:16:14.512 "seek_data": false, 00:16:14.512 "copy": false, 00:16:14.512 "nvme_iov_md": false 00:16:14.512 }, 00:16:14.512 "driver_specific": { 00:16:14.512 "raid": { 00:16:14.512 "uuid": "e6a99462-d922-46b3-a0d2-44f2e8dca22f", 00:16:14.512 "strip_size_kb": 64, 00:16:14.512 "state": "online", 00:16:14.512 "raid_level": "raid5f", 00:16:14.512 "superblock": true, 00:16:14.512 "num_base_bdevs": 4, 00:16:14.512 "num_base_bdevs_discovered": 4, 
00:16:14.512 "num_base_bdevs_operational": 4, 00:16:14.512 "base_bdevs_list": [ 00:16:14.512 { 00:16:14.512 "name": "pt1", 00:16:14.512 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:14.512 "is_configured": true, 00:16:14.512 "data_offset": 2048, 00:16:14.512 "data_size": 63488 00:16:14.512 }, 00:16:14.512 { 00:16:14.512 "name": "pt2", 00:16:14.512 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:14.512 "is_configured": true, 00:16:14.512 "data_offset": 2048, 00:16:14.512 "data_size": 63488 00:16:14.512 }, 00:16:14.512 { 00:16:14.512 "name": "pt3", 00:16:14.512 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:14.512 "is_configured": true, 00:16:14.512 "data_offset": 2048, 00:16:14.512 "data_size": 63488 00:16:14.512 }, 00:16:14.512 { 00:16:14.512 "name": "pt4", 00:16:14.512 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:14.512 "is_configured": true, 00:16:14.512 "data_offset": 2048, 00:16:14.512 "data_size": 63488 00:16:14.512 } 00:16:14.512 ] 00:16:14.512 } 00:16:14.512 } 00:16:14.512 }' 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:14.512 pt2 00:16:14.512 pt3 00:16:14.512 pt4' 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.512 15:24:00 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.512 
15:24:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:14.512 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.771 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:14.771 15:24:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:14.771 15:24:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.771 15:24:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.771 [2024-11-20 15:24:01.000989] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:14.771 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:14.771 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e6a99462-d922-46b3-a0d2-44f2e8dca22f '!=' e6a99462-d922-46b3-a0d2-44f2e8dca22f ']' 00:16:14.771 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:14.771 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:14.771 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:14.771 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:14.771 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.771 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.771 [2024-11-20 15:24:01.040859] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:14.771 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.771 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:14.771 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.772 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.772 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:14.772 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.772 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:14.772 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.772 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.772 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:16:14.772 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.772 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.772 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.772 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.772 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.772 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.772 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.772 "name": "raid_bdev1", 00:16:14.772 "uuid": "e6a99462-d922-46b3-a0d2-44f2e8dca22f", 00:16:14.772 "strip_size_kb": 64, 00:16:14.772 "state": "online", 00:16:14.772 "raid_level": "raid5f", 00:16:14.772 "superblock": true, 00:16:14.772 "num_base_bdevs": 4, 00:16:14.772 "num_base_bdevs_discovered": 3, 00:16:14.772 "num_base_bdevs_operational": 3, 00:16:14.772 "base_bdevs_list": [ 00:16:14.772 { 00:16:14.772 "name": null, 00:16:14.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.772 "is_configured": false, 00:16:14.772 "data_offset": 0, 00:16:14.772 "data_size": 63488 00:16:14.772 }, 00:16:14.772 { 00:16:14.772 "name": "pt2", 00:16:14.772 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:14.772 "is_configured": true, 00:16:14.772 "data_offset": 2048, 00:16:14.772 "data_size": 63488 00:16:14.772 }, 00:16:14.772 { 00:16:14.772 "name": "pt3", 00:16:14.772 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:14.772 "is_configured": true, 00:16:14.772 "data_offset": 2048, 00:16:14.772 "data_size": 63488 00:16:14.772 }, 00:16:14.772 { 00:16:14.772 "name": "pt4", 00:16:14.772 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:14.772 "is_configured": true, 00:16:14.772 
"data_offset": 2048, 00:16:14.772 "data_size": 63488 00:16:14.772 } 00:16:14.772 ] 00:16:14.772 }' 00:16:14.772 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.772 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.030 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:15.030 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.030 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.030 [2024-11-20 15:24:01.444227] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:15.030 [2024-11-20 15:24:01.444264] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:15.030 [2024-11-20 15:24:01.444346] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:15.030 [2024-11-20 15:24:01.444427] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:15.030 [2024-11-20 15:24:01.444438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:15.030 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.030 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.030 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.030 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.030 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:15.030 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.030 15:24:01 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:15.030 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:15.030 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:15.030 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:15.030 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:15.030 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.030 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.030 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.031 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:15.031 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:15.031 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:15.031 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.031 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.290 [2024-11-20 15:24:01.528111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:15.290 [2024-11-20 15:24:01.528189] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.290 [2024-11-20 15:24:01.528212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:15.290 [2024-11-20 15:24:01.528223] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.290 [2024-11-20 15:24:01.530744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.290 [2024-11-20 15:24:01.530788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:15.290 [2024-11-20 15:24:01.530886] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:15.290 [2024-11-20 15:24:01.530938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:15.290 pt2 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.290 "name": "raid_bdev1", 00:16:15.290 "uuid": "e6a99462-d922-46b3-a0d2-44f2e8dca22f", 00:16:15.290 "strip_size_kb": 64, 00:16:15.290 "state": "configuring", 00:16:15.290 "raid_level": "raid5f", 00:16:15.290 "superblock": true, 00:16:15.290 
"num_base_bdevs": 4, 00:16:15.290 "num_base_bdevs_discovered": 1, 00:16:15.290 "num_base_bdevs_operational": 3, 00:16:15.290 "base_bdevs_list": [ 00:16:15.290 { 00:16:15.290 "name": null, 00:16:15.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.290 "is_configured": false, 00:16:15.290 "data_offset": 2048, 00:16:15.290 "data_size": 63488 00:16:15.290 }, 00:16:15.290 { 00:16:15.290 "name": "pt2", 00:16:15.290 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:15.290 "is_configured": true, 00:16:15.290 "data_offset": 2048, 00:16:15.290 "data_size": 63488 00:16:15.290 }, 00:16:15.290 { 00:16:15.290 "name": null, 00:16:15.290 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:15.290 "is_configured": false, 00:16:15.290 "data_offset": 2048, 00:16:15.290 "data_size": 63488 00:16:15.290 }, 00:16:15.290 { 00:16:15.290 "name": null, 00:16:15.290 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:15.290 "is_configured": false, 00:16:15.290 "data_offset": 2048, 00:16:15.290 "data_size": 63488 00:16:15.290 } 00:16:15.290 ] 00:16:15.290 }' 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.290 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.550 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:15.550 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:15.550 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:15.550 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.550 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.550 [2024-11-20 15:24:01.964726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:15.550 [2024-11-20 
15:24:01.964832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.550 [2024-11-20 15:24:01.964860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:15.550 [2024-11-20 15:24:01.964872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.550 [2024-11-20 15:24:01.965339] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.550 [2024-11-20 15:24:01.965368] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:15.550 [2024-11-20 15:24:01.965458] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:15.550 [2024-11-20 15:24:01.965482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:15.550 pt3 00:16:15.550 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.550 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:15.550 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.550 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:15.550 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.550 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.550 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:15.550 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.550 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.550 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:15.550 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.550 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.550 15:24:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.550 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.550 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.550 15:24:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.550 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.550 "name": "raid_bdev1", 00:16:15.550 "uuid": "e6a99462-d922-46b3-a0d2-44f2e8dca22f", 00:16:15.550 "strip_size_kb": 64, 00:16:15.550 "state": "configuring", 00:16:15.550 "raid_level": "raid5f", 00:16:15.550 "superblock": true, 00:16:15.550 "num_base_bdevs": 4, 00:16:15.550 "num_base_bdevs_discovered": 2, 00:16:15.550 "num_base_bdevs_operational": 3, 00:16:15.550 "base_bdevs_list": [ 00:16:15.550 { 00:16:15.550 "name": null, 00:16:15.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.550 "is_configured": false, 00:16:15.550 "data_offset": 2048, 00:16:15.550 "data_size": 63488 00:16:15.550 }, 00:16:15.550 { 00:16:15.550 "name": "pt2", 00:16:15.550 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:15.550 "is_configured": true, 00:16:15.550 "data_offset": 2048, 00:16:15.550 "data_size": 63488 00:16:15.550 }, 00:16:15.550 { 00:16:15.550 "name": "pt3", 00:16:15.550 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:15.550 "is_configured": true, 00:16:15.550 "data_offset": 2048, 00:16:15.550 "data_size": 63488 00:16:15.550 }, 00:16:15.550 { 00:16:15.550 "name": null, 00:16:15.550 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:15.550 "is_configured": false, 00:16:15.551 "data_offset": 2048, 
00:16:15.551 "data_size": 63488 00:16:15.551 } 00:16:15.551 ] 00:16:15.551 }' 00:16:15.551 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.551 15:24:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.119 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:16.119 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:16.119 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:16.119 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:16.119 15:24:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.119 15:24:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.119 [2024-11-20 15:24:02.384113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:16.119 [2024-11-20 15:24:02.384191] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.119 [2024-11-20 15:24:02.384216] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:16.119 [2024-11-20 15:24:02.384228] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.119 [2024-11-20 15:24:02.384697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.119 [2024-11-20 15:24:02.384734] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:16.119 [2024-11-20 15:24:02.384819] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:16.119 [2024-11-20 15:24:02.384849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:16.120 [2024-11-20 15:24:02.384980] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:16.120 [2024-11-20 15:24:02.384998] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:16.120 [2024-11-20 15:24:02.385279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:16.120 [2024-11-20 15:24:02.392108] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:16.120 [2024-11-20 15:24:02.392168] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:16.120 [2024-11-20 15:24:02.392547] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.120 pt4 00:16:16.120 15:24:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.120 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:16.120 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.120 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.120 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.120 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.120 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:16.120 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.120 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.120 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.120 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.120 
15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.120 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.120 15:24:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.120 15:24:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.120 15:24:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.120 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.120 "name": "raid_bdev1", 00:16:16.120 "uuid": "e6a99462-d922-46b3-a0d2-44f2e8dca22f", 00:16:16.120 "strip_size_kb": 64, 00:16:16.120 "state": "online", 00:16:16.120 "raid_level": "raid5f", 00:16:16.120 "superblock": true, 00:16:16.120 "num_base_bdevs": 4, 00:16:16.120 "num_base_bdevs_discovered": 3, 00:16:16.120 "num_base_bdevs_operational": 3, 00:16:16.120 "base_bdevs_list": [ 00:16:16.120 { 00:16:16.120 "name": null, 00:16:16.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.120 "is_configured": false, 00:16:16.120 "data_offset": 2048, 00:16:16.120 "data_size": 63488 00:16:16.120 }, 00:16:16.120 { 00:16:16.120 "name": "pt2", 00:16:16.120 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:16.120 "is_configured": true, 00:16:16.120 "data_offset": 2048, 00:16:16.120 "data_size": 63488 00:16:16.120 }, 00:16:16.120 { 00:16:16.120 "name": "pt3", 00:16:16.120 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:16.120 "is_configured": true, 00:16:16.120 "data_offset": 2048, 00:16:16.120 "data_size": 63488 00:16:16.120 }, 00:16:16.120 { 00:16:16.120 "name": "pt4", 00:16:16.120 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:16.120 "is_configured": true, 00:16:16.120 "data_offset": 2048, 00:16:16.120 "data_size": 63488 00:16:16.120 } 00:16:16.120 ] 00:16:16.120 }' 00:16:16.120 15:24:02 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.120 15:24:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.379 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:16.379 15:24:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.379 15:24:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.379 [2024-11-20 15:24:02.821342] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:16.379 [2024-11-20 15:24:02.821387] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:16.379 [2024-11-20 15:24:02.821470] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:16.379 [2024-11-20 15:24:02.821549] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:16.379 [2024-11-20 15:24:02.821564] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:16.379 15:24:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.379 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.379 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:16.379 15:24:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.379 15:24:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.379 15:24:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.639 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:16.639 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:16:16.639 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:16.639 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:16.639 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:16.639 15:24:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.639 15:24:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.639 15:24:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.639 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:16.639 15:24:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.639 15:24:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.639 [2024-11-20 15:24:02.885253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:16.639 [2024-11-20 15:24:02.885344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.639 [2024-11-20 15:24:02.885374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:16.639 [2024-11-20 15:24:02.885394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.639 [2024-11-20 15:24:02.888237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.639 [2024-11-20 15:24:02.888299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:16.639 [2024-11-20 15:24:02.888404] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:16.639 [2024-11-20 15:24:02.888466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:16.639 
[2024-11-20 15:24:02.888605] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:16.639 [2024-11-20 15:24:02.888630] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:16.639 [2024-11-20 15:24:02.888649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:16.639 [2024-11-20 15:24:02.888745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:16.639 [2024-11-20 15:24:02.888859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:16.639 pt1 00:16:16.639 15:24:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.639 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:16.639 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:16.639 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.639 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:16.639 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.639 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.639 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:16.639 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.639 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.639 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.639 15:24:02 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.639 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.639 15:24:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.639 15:24:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.639 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.639 15:24:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.639 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.639 "name": "raid_bdev1", 00:16:16.639 "uuid": "e6a99462-d922-46b3-a0d2-44f2e8dca22f", 00:16:16.639 "strip_size_kb": 64, 00:16:16.639 "state": "configuring", 00:16:16.639 "raid_level": "raid5f", 00:16:16.639 "superblock": true, 00:16:16.639 "num_base_bdevs": 4, 00:16:16.639 "num_base_bdevs_discovered": 2, 00:16:16.639 "num_base_bdevs_operational": 3, 00:16:16.639 "base_bdevs_list": [ 00:16:16.639 { 00:16:16.639 "name": null, 00:16:16.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.639 "is_configured": false, 00:16:16.639 "data_offset": 2048, 00:16:16.639 "data_size": 63488 00:16:16.639 }, 00:16:16.639 { 00:16:16.639 "name": "pt2", 00:16:16.639 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:16.639 "is_configured": true, 00:16:16.639 "data_offset": 2048, 00:16:16.639 "data_size": 63488 00:16:16.639 }, 00:16:16.639 { 00:16:16.639 "name": "pt3", 00:16:16.639 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:16.639 "is_configured": true, 00:16:16.639 "data_offset": 2048, 00:16:16.639 "data_size": 63488 00:16:16.639 }, 00:16:16.639 { 00:16:16.639 "name": null, 00:16:16.639 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:16.639 "is_configured": false, 00:16:16.639 "data_offset": 2048, 00:16:16.639 "data_size": 63488 00:16:16.639 } 00:16:16.639 ] 
00:16:16.639 }' 00:16:16.639 15:24:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.639 15:24:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.899 15:24:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:16.899 15:24:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.899 15:24:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.899 15:24:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:16.899 15:24:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.899 15:24:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:16.899 15:24:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:16.899 15:24:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.899 15:24:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.899 [2024-11-20 15:24:03.352683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:16.899 [2024-11-20 15:24:03.352766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.899 [2024-11-20 15:24:03.352794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:16.899 [2024-11-20 15:24:03.352807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.899 [2024-11-20 15:24:03.353304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.899 [2024-11-20 15:24:03.353335] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:16:16.899 [2024-11-20 15:24:03.353428] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:16.899 [2024-11-20 15:24:03.353452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:16.899 [2024-11-20 15:24:03.353606] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:16.899 [2024-11-20 15:24:03.353623] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:16.899 [2024-11-20 15:24:03.353920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:16.899 [2024-11-20 15:24:03.362189] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:16.899 [2024-11-20 15:24:03.362233] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:16.899 [2024-11-20 15:24:03.362565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.899 pt4 00:16:16.899 15:24:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.899 15:24:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:16.899 15:24:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.899 15:24:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.899 15:24:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.899 15:24:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.899 15:24:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:16.899 15:24:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.899 15:24:03 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.899 15:24:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.899 15:24:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.899 15:24:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.899 15:24:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.899 15:24:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.899 15:24:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.158 15:24:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.158 15:24:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.158 "name": "raid_bdev1", 00:16:17.158 "uuid": "e6a99462-d922-46b3-a0d2-44f2e8dca22f", 00:16:17.158 "strip_size_kb": 64, 00:16:17.158 "state": "online", 00:16:17.158 "raid_level": "raid5f", 00:16:17.158 "superblock": true, 00:16:17.158 "num_base_bdevs": 4, 00:16:17.158 "num_base_bdevs_discovered": 3, 00:16:17.158 "num_base_bdevs_operational": 3, 00:16:17.158 "base_bdevs_list": [ 00:16:17.158 { 00:16:17.158 "name": null, 00:16:17.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.158 "is_configured": false, 00:16:17.158 "data_offset": 2048, 00:16:17.158 "data_size": 63488 00:16:17.158 }, 00:16:17.158 { 00:16:17.158 "name": "pt2", 00:16:17.158 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:17.158 "is_configured": true, 00:16:17.158 "data_offset": 2048, 00:16:17.158 "data_size": 63488 00:16:17.158 }, 00:16:17.158 { 00:16:17.158 "name": "pt3", 00:16:17.158 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:17.158 "is_configured": true, 00:16:17.158 "data_offset": 2048, 00:16:17.158 "data_size": 63488 
00:16:17.158 }, 00:16:17.158 { 00:16:17.158 "name": "pt4", 00:16:17.158 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:17.158 "is_configured": true, 00:16:17.158 "data_offset": 2048, 00:16:17.158 "data_size": 63488 00:16:17.158 } 00:16:17.158 ] 00:16:17.158 }' 00:16:17.158 15:24:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.158 15:24:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.418 15:24:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:17.418 15:24:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.418 15:24:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.418 15:24:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:17.418 15:24:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.418 15:24:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:17.418 15:24:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:17.418 15:24:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.418 15:24:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.418 15:24:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:17.418 [2024-11-20 15:24:03.787609] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:17.418 15:24:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.419 15:24:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' e6a99462-d922-46b3-a0d2-44f2e8dca22f '!=' e6a99462-d922-46b3-a0d2-44f2e8dca22f ']' 00:16:17.419 15:24:03 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83922 00:16:17.419 15:24:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 83922 ']' 00:16:17.419 15:24:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 83922 00:16:17.419 15:24:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:17.419 15:24:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:17.419 15:24:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83922 00:16:17.419 killing process with pid 83922 00:16:17.419 15:24:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:17.419 15:24:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:17.419 15:24:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83922' 00:16:17.419 15:24:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 83922 00:16:17.419 [2024-11-20 15:24:03.860436] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:17.419 [2024-11-20 15:24:03.860546] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:17.419 15:24:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 83922 00:16:17.419 [2024-11-20 15:24:03.860628] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:17.419 [2024-11-20 15:24:03.860645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:18.004 [2024-11-20 15:24:04.263064] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:18.944 15:24:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:18.944 
00:16:18.944 real 0m8.269s 00:16:18.944 user 0m12.869s 00:16:18.944 sys 0m1.777s 00:16:18.944 15:24:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:18.944 ************************************ 00:16:18.944 END TEST raid5f_superblock_test 00:16:18.944 ************************************ 00:16:18.944 15:24:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.204 15:24:05 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:19.204 15:24:05 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:16:19.204 15:24:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:19.204 15:24:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:19.204 15:24:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:19.204 ************************************ 00:16:19.204 START TEST raid5f_rebuild_test 00:16:19.204 ************************************ 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:19.204 15:24:05 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84402 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84402 00:16:19.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84402 ']' 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.204 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:19.205 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.205 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:19.205 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.205 [2024-11-20 15:24:05.606729] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:16:19.205 [2024-11-20 15:24:05.607064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:16:19.205 Zero copy mechanism will not be used. 
00:16:19.205 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84402 ] 00:16:19.464 [2024-11-20 15:24:05.789143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.464 [2024-11-20 15:24:05.917303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.723 [2024-11-20 15:24:06.133465] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:19.723 [2024-11-20 15:24:06.133707] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:19.982 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:19.982 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:19.982 15:24:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:19.982 15:24:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:19.982 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.982 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.242 BaseBdev1_malloc 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.242 [2024-11-20 15:24:06.503599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:20.242 [2024-11-20 15:24:06.503694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:16:20.242 [2024-11-20 15:24:06.503720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:20.242 [2024-11-20 15:24:06.503735] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.242 [2024-11-20 15:24:06.506153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.242 [2024-11-20 15:24:06.506322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:20.242 BaseBdev1 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.242 BaseBdev2_malloc 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.242 [2024-11-20 15:24:06.557227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:20.242 [2024-11-20 15:24:06.557503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.242 [2024-11-20 15:24:06.557543] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:20.242 [2024-11-20 15:24:06.557558] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.242 [2024-11-20 15:24:06.560146] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.242 [2024-11-20 15:24:06.560194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:20.242 BaseBdev2 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.242 BaseBdev3_malloc 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.242 [2024-11-20 15:24:06.629391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:20.242 [2024-11-20 15:24:06.629468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.242 [2024-11-20 15:24:06.629494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:20.242 [2024-11-20 15:24:06.629509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.242 [2024-11-20 15:24:06.632217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.242 [2024-11-20 
15:24:06.632273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:20.242 BaseBdev3 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.242 BaseBdev4_malloc 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.242 [2024-11-20 15:24:06.686277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:20.242 [2024-11-20 15:24:06.686359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.242 [2024-11-20 15:24:06.686386] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:20.242 [2024-11-20 15:24:06.686401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.242 [2024-11-20 15:24:06.688959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.242 [2024-11-20 15:24:06.689009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:20.242 BaseBdev4 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.242 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.502 spare_malloc 00:16:20.502 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.502 15:24:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:20.502 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.502 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.502 spare_delay 00:16:20.502 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.502 15:24:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:20.502 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.502 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.502 [2024-11-20 15:24:06.756967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:20.502 [2024-11-20 15:24:06.757036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.502 [2024-11-20 15:24:06.757059] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:20.502 [2024-11-20 15:24:06.757073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.502 [2024-11-20 15:24:06.759572] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.502 [2024-11-20 15:24:06.759622] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:20.502 spare 00:16:20.502 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.502 15:24:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:20.502 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.502 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.502 [2024-11-20 15:24:06.768992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:20.502 [2024-11-20 15:24:06.771150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:20.502 [2024-11-20 15:24:06.771220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:20.502 [2024-11-20 15:24:06.771276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:20.502 [2024-11-20 15:24:06.771380] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:20.502 [2024-11-20 15:24:06.771396] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:20.502 [2024-11-20 15:24:06.771729] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:20.502 [2024-11-20 15:24:06.779894] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:20.502 [2024-11-20 15:24:06.780060] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:20.502 [2024-11-20 15:24:06.780489] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.502 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.502 15:24:06 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:20.502 15:24:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.502 15:24:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.502 15:24:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.502 15:24:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.502 15:24:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:20.502 15:24:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.502 15:24:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.502 15:24:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.502 15:24:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.502 15:24:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.502 15:24:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.502 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.502 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.502 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.502 15:24:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.502 "name": "raid_bdev1", 00:16:20.502 "uuid": "1063129c-dd3b-4f11-abc1-d8068da9968a", 00:16:20.502 "strip_size_kb": 64, 00:16:20.502 "state": "online", 00:16:20.502 "raid_level": "raid5f", 00:16:20.502 "superblock": false, 00:16:20.502 "num_base_bdevs": 4, 00:16:20.502 
"num_base_bdevs_discovered": 4, 00:16:20.502 "num_base_bdevs_operational": 4, 00:16:20.502 "base_bdevs_list": [ 00:16:20.502 { 00:16:20.502 "name": "BaseBdev1", 00:16:20.502 "uuid": "da47d343-d9d2-59d3-a774-83a6264108d8", 00:16:20.502 "is_configured": true, 00:16:20.503 "data_offset": 0, 00:16:20.503 "data_size": 65536 00:16:20.503 }, 00:16:20.503 { 00:16:20.503 "name": "BaseBdev2", 00:16:20.503 "uuid": "287009e0-710c-5f4d-99fb-dc50e100fae5", 00:16:20.503 "is_configured": true, 00:16:20.503 "data_offset": 0, 00:16:20.503 "data_size": 65536 00:16:20.503 }, 00:16:20.503 { 00:16:20.503 "name": "BaseBdev3", 00:16:20.503 "uuid": "45d71ded-7ef1-568e-868f-1956a89a829c", 00:16:20.503 "is_configured": true, 00:16:20.503 "data_offset": 0, 00:16:20.503 "data_size": 65536 00:16:20.503 }, 00:16:20.503 { 00:16:20.503 "name": "BaseBdev4", 00:16:20.503 "uuid": "eee64eb7-128f-547b-94c2-e7d374275a97", 00:16:20.503 "is_configured": true, 00:16:20.503 "data_offset": 0, 00:16:20.503 "data_size": 65536 00:16:20.503 } 00:16:20.503 ] 00:16:20.503 }' 00:16:20.503 15:24:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.503 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.762 15:24:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:20.762 15:24:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:20.762 15:24:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.762 15:24:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.762 [2024-11-20 15:24:07.200697] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:20.762 15:24:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.762 15:24:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 
00:16:20.762 15:24:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.762 15:24:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.762 15:24:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.762 15:24:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:21.022 15:24:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.022 15:24:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:21.022 15:24:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:21.022 15:24:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:21.022 15:24:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:21.022 15:24:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:21.022 15:24:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:21.022 15:24:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:21.022 15:24:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:21.022 15:24:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:21.022 15:24:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:21.022 15:24:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:21.022 15:24:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:21.022 15:24:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:21.022 15:24:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:21.022 [2024-11-20 15:24:07.492103] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:21.282 /dev/nbd0 00:16:21.282 15:24:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:21.282 15:24:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:21.282 15:24:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:21.282 15:24:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:21.282 15:24:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:21.282 15:24:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:21.282 15:24:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:21.282 15:24:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:21.282 15:24:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:21.282 15:24:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:21.282 15:24:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:21.282 1+0 records in 00:16:21.282 1+0 records out 00:16:21.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029546 s, 13.9 MB/s 00:16:21.282 15:24:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:21.282 15:24:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:21.282 15:24:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:16:21.282 15:24:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:21.282 15:24:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:21.282 15:24:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:21.282 15:24:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:21.282 15:24:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:21.282 15:24:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:21.282 15:24:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:21.282 15:24:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:16:21.851 512+0 records in 00:16:21.851 512+0 records out 00:16:21.851 100663296 bytes (101 MB, 96 MiB) copied, 0.500877 s, 201 MB/s 00:16:21.851 15:24:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:21.851 15:24:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:21.851 15:24:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:21.851 15:24:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:21.851 15:24:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:21.851 15:24:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:21.851 15:24:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:21.851 15:24:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:21.851 [2024-11-20 15:24:08.315991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:16:21.851 15:24:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:21.851 15:24:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:21.851 15:24:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:21.851 15:24:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:21.851 15:24:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:21.851 15:24:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:21.851 15:24:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:21.851 15:24:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:21.851 15:24:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.851 15:24:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.851 [2024-11-20 15:24:08.330216] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:22.110 15:24:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.110 15:24:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:22.110 15:24:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.111 15:24:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.111 15:24:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.111 15:24:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.111 15:24:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.111 15:24:08 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.111 15:24:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.111 15:24:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.111 15:24:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.111 15:24:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.111 15:24:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.111 15:24:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.111 15:24:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.111 15:24:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.111 15:24:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.111 "name": "raid_bdev1", 00:16:22.111 "uuid": "1063129c-dd3b-4f11-abc1-d8068da9968a", 00:16:22.111 "strip_size_kb": 64, 00:16:22.111 "state": "online", 00:16:22.111 "raid_level": "raid5f", 00:16:22.111 "superblock": false, 00:16:22.111 "num_base_bdevs": 4, 00:16:22.111 "num_base_bdevs_discovered": 3, 00:16:22.111 "num_base_bdevs_operational": 3, 00:16:22.111 "base_bdevs_list": [ 00:16:22.111 { 00:16:22.111 "name": null, 00:16:22.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.111 "is_configured": false, 00:16:22.111 "data_offset": 0, 00:16:22.111 "data_size": 65536 00:16:22.111 }, 00:16:22.111 { 00:16:22.111 "name": "BaseBdev2", 00:16:22.111 "uuid": "287009e0-710c-5f4d-99fb-dc50e100fae5", 00:16:22.111 "is_configured": true, 00:16:22.111 "data_offset": 0, 00:16:22.111 "data_size": 65536 00:16:22.111 }, 00:16:22.111 { 00:16:22.111 "name": "BaseBdev3", 00:16:22.111 "uuid": "45d71ded-7ef1-568e-868f-1956a89a829c", 00:16:22.111 "is_configured": true, 00:16:22.111 
"data_offset": 0, 00:16:22.111 "data_size": 65536 00:16:22.111 }, 00:16:22.111 { 00:16:22.111 "name": "BaseBdev4", 00:16:22.111 "uuid": "eee64eb7-128f-547b-94c2-e7d374275a97", 00:16:22.111 "is_configured": true, 00:16:22.111 "data_offset": 0, 00:16:22.111 "data_size": 65536 00:16:22.111 } 00:16:22.111 ] 00:16:22.111 }' 00:16:22.111 15:24:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.111 15:24:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.370 15:24:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:22.370 15:24:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.370 15:24:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.370 [2024-11-20 15:24:08.773603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:22.370 [2024-11-20 15:24:08.790451] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:22.370 15:24:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.370 15:24:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:22.370 [2024-11-20 15:24:08.801068] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:23.748 15:24:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:23.748 15:24:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:23.748 15:24:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:23.748 15:24:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:23.748 15:24:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.748 
15:24:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.748 15:24:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.748 15:24:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.748 15:24:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.748 15:24:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.748 15:24:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.748 "name": "raid_bdev1", 00:16:23.748 "uuid": "1063129c-dd3b-4f11-abc1-d8068da9968a", 00:16:23.748 "strip_size_kb": 64, 00:16:23.748 "state": "online", 00:16:23.748 "raid_level": "raid5f", 00:16:23.748 "superblock": false, 00:16:23.748 "num_base_bdevs": 4, 00:16:23.748 "num_base_bdevs_discovered": 4, 00:16:23.748 "num_base_bdevs_operational": 4, 00:16:23.748 "process": { 00:16:23.748 "type": "rebuild", 00:16:23.748 "target": "spare", 00:16:23.748 "progress": { 00:16:23.748 "blocks": 17280, 00:16:23.748 "percent": 8 00:16:23.748 } 00:16:23.748 }, 00:16:23.748 "base_bdevs_list": [ 00:16:23.748 { 00:16:23.748 "name": "spare", 00:16:23.748 "uuid": "d794893c-cbe6-55d1-9be7-f90c074237d9", 00:16:23.748 "is_configured": true, 00:16:23.748 "data_offset": 0, 00:16:23.748 "data_size": 65536 00:16:23.748 }, 00:16:23.748 { 00:16:23.748 "name": "BaseBdev2", 00:16:23.748 "uuid": "287009e0-710c-5f4d-99fb-dc50e100fae5", 00:16:23.748 "is_configured": true, 00:16:23.748 "data_offset": 0, 00:16:23.748 "data_size": 65536 00:16:23.748 }, 00:16:23.748 { 00:16:23.748 "name": "BaseBdev3", 00:16:23.748 "uuid": "45d71ded-7ef1-568e-868f-1956a89a829c", 00:16:23.748 "is_configured": true, 00:16:23.748 "data_offset": 0, 00:16:23.748 "data_size": 65536 00:16:23.748 }, 00:16:23.748 { 00:16:23.748 "name": "BaseBdev4", 00:16:23.748 "uuid": 
"eee64eb7-128f-547b-94c2-e7d374275a97", 00:16:23.748 "is_configured": true, 00:16:23.748 "data_offset": 0, 00:16:23.748 "data_size": 65536 00:16:23.748 } 00:16:23.748 ] 00:16:23.748 }' 00:16:23.748 15:24:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:23.748 15:24:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:23.748 15:24:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.748 15:24:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:23.748 15:24:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:23.748 15:24:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.748 15:24:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.748 [2024-11-20 15:24:09.932318] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:23.749 [2024-11-20 15:24:10.010230] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:23.749 [2024-11-20 15:24:10.010607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.749 [2024-11-20 15:24:10.010634] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:23.749 [2024-11-20 15:24:10.010651] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:23.749 15:24:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.749 15:24:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:23.749 15:24:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.749 15:24:10 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.749 15:24:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.749 15:24:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.749 15:24:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:23.749 15:24:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.749 15:24:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.749 15:24:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.749 15:24:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.749 15:24:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.749 15:24:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.749 15:24:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.749 15:24:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.749 15:24:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.749 15:24:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.749 "name": "raid_bdev1", 00:16:23.749 "uuid": "1063129c-dd3b-4f11-abc1-d8068da9968a", 00:16:23.749 "strip_size_kb": 64, 00:16:23.749 "state": "online", 00:16:23.749 "raid_level": "raid5f", 00:16:23.749 "superblock": false, 00:16:23.749 "num_base_bdevs": 4, 00:16:23.749 "num_base_bdevs_discovered": 3, 00:16:23.749 "num_base_bdevs_operational": 3, 00:16:23.749 "base_bdevs_list": [ 00:16:23.749 { 00:16:23.749 "name": null, 00:16:23.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.749 "is_configured": false, 00:16:23.749 "data_offset": 0, 
00:16:23.749 "data_size": 65536 00:16:23.749 }, 00:16:23.749 { 00:16:23.749 "name": "BaseBdev2", 00:16:23.749 "uuid": "287009e0-710c-5f4d-99fb-dc50e100fae5", 00:16:23.749 "is_configured": true, 00:16:23.749 "data_offset": 0, 00:16:23.749 "data_size": 65536 00:16:23.749 }, 00:16:23.749 { 00:16:23.749 "name": "BaseBdev3", 00:16:23.749 "uuid": "45d71ded-7ef1-568e-868f-1956a89a829c", 00:16:23.749 "is_configured": true, 00:16:23.749 "data_offset": 0, 00:16:23.749 "data_size": 65536 00:16:23.749 }, 00:16:23.749 { 00:16:23.749 "name": "BaseBdev4", 00:16:23.749 "uuid": "eee64eb7-128f-547b-94c2-e7d374275a97", 00:16:23.749 "is_configured": true, 00:16:23.749 "data_offset": 0, 00:16:23.749 "data_size": 65536 00:16:23.749 } 00:16:23.749 ] 00:16:23.749 }' 00:16:23.749 15:24:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.749 15:24:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.008 15:24:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:24.008 15:24:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.008 15:24:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:24.008 15:24:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:24.008 15:24:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:24.008 15:24:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.008 15:24:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.008 15:24:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.008 15:24:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.008 15:24:10 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.267 15:24:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:24.267 "name": "raid_bdev1", 00:16:24.267 "uuid": "1063129c-dd3b-4f11-abc1-d8068da9968a", 00:16:24.267 "strip_size_kb": 64, 00:16:24.267 "state": "online", 00:16:24.267 "raid_level": "raid5f", 00:16:24.267 "superblock": false, 00:16:24.267 "num_base_bdevs": 4, 00:16:24.267 "num_base_bdevs_discovered": 3, 00:16:24.267 "num_base_bdevs_operational": 3, 00:16:24.267 "base_bdevs_list": [ 00:16:24.267 { 00:16:24.267 "name": null, 00:16:24.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.267 "is_configured": false, 00:16:24.267 "data_offset": 0, 00:16:24.267 "data_size": 65536 00:16:24.267 }, 00:16:24.267 { 00:16:24.267 "name": "BaseBdev2", 00:16:24.267 "uuid": "287009e0-710c-5f4d-99fb-dc50e100fae5", 00:16:24.267 "is_configured": true, 00:16:24.267 "data_offset": 0, 00:16:24.267 "data_size": 65536 00:16:24.267 }, 00:16:24.267 { 00:16:24.267 "name": "BaseBdev3", 00:16:24.267 "uuid": "45d71ded-7ef1-568e-868f-1956a89a829c", 00:16:24.267 "is_configured": true, 00:16:24.267 "data_offset": 0, 00:16:24.267 "data_size": 65536 00:16:24.267 }, 00:16:24.267 { 00:16:24.267 "name": "BaseBdev4", 00:16:24.267 "uuid": "eee64eb7-128f-547b-94c2-e7d374275a97", 00:16:24.267 "is_configured": true, 00:16:24.267 "data_offset": 0, 00:16:24.267 "data_size": 65536 00:16:24.267 } 00:16:24.268 ] 00:16:24.268 }' 00:16:24.268 15:24:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.268 15:24:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:24.268 15:24:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.268 15:24:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:24.268 15:24:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd 
bdev_raid_add_base_bdev raid_bdev1 spare 00:16:24.268 15:24:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.268 15:24:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.268 [2024-11-20 15:24:10.604355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:24.268 [2024-11-20 15:24:10.620147] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:16:24.268 15:24:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.268 15:24:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:24.268 [2024-11-20 15:24:10.630650] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:25.204 15:24:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:25.204 15:24:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.204 15:24:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:25.204 15:24:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:25.204 15:24:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.204 15:24:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.204 15:24:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.204 15:24:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.204 15:24:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.204 15:24:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.204 15:24:11 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.204 "name": "raid_bdev1", 00:16:25.204 "uuid": "1063129c-dd3b-4f11-abc1-d8068da9968a", 00:16:25.204 "strip_size_kb": 64, 00:16:25.204 "state": "online", 00:16:25.204 "raid_level": "raid5f", 00:16:25.204 "superblock": false, 00:16:25.204 "num_base_bdevs": 4, 00:16:25.204 "num_base_bdevs_discovered": 4, 00:16:25.204 "num_base_bdevs_operational": 4, 00:16:25.204 "process": { 00:16:25.204 "type": "rebuild", 00:16:25.204 "target": "spare", 00:16:25.204 "progress": { 00:16:25.204 "blocks": 19200, 00:16:25.204 "percent": 9 00:16:25.204 } 00:16:25.204 }, 00:16:25.204 "base_bdevs_list": [ 00:16:25.204 { 00:16:25.204 "name": "spare", 00:16:25.204 "uuid": "d794893c-cbe6-55d1-9be7-f90c074237d9", 00:16:25.204 "is_configured": true, 00:16:25.204 "data_offset": 0, 00:16:25.204 "data_size": 65536 00:16:25.204 }, 00:16:25.204 { 00:16:25.204 "name": "BaseBdev2", 00:16:25.204 "uuid": "287009e0-710c-5f4d-99fb-dc50e100fae5", 00:16:25.204 "is_configured": true, 00:16:25.204 "data_offset": 0, 00:16:25.204 "data_size": 65536 00:16:25.204 }, 00:16:25.204 { 00:16:25.204 "name": "BaseBdev3", 00:16:25.204 "uuid": "45d71ded-7ef1-568e-868f-1956a89a829c", 00:16:25.204 "is_configured": true, 00:16:25.204 "data_offset": 0, 00:16:25.204 "data_size": 65536 00:16:25.204 }, 00:16:25.204 { 00:16:25.204 "name": "BaseBdev4", 00:16:25.204 "uuid": "eee64eb7-128f-547b-94c2-e7d374275a97", 00:16:25.204 "is_configured": true, 00:16:25.204 "data_offset": 0, 00:16:25.204 "data_size": 65536 00:16:25.204 } 00:16:25.204 ] 00:16:25.204 }' 00:16:25.204 15:24:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.463 15:24:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:25.463 15:24:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.463 15:24:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:16:25.463 15:24:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:25.463 15:24:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:25.463 15:24:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:25.463 15:24:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=615 00:16:25.463 15:24:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:25.463 15:24:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:25.463 15:24:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.463 15:24:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:25.463 15:24:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:25.463 15:24:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.463 15:24:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.463 15:24:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.463 15:24:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.463 15:24:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.463 15:24:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.463 15:24:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.463 "name": "raid_bdev1", 00:16:25.463 "uuid": "1063129c-dd3b-4f11-abc1-d8068da9968a", 00:16:25.463 "strip_size_kb": 64, 00:16:25.463 "state": "online", 00:16:25.463 "raid_level": "raid5f", 00:16:25.463 "superblock": false, 
00:16:25.463 "num_base_bdevs": 4, 00:16:25.463 "num_base_bdevs_discovered": 4, 00:16:25.463 "num_base_bdevs_operational": 4, 00:16:25.463 "process": { 00:16:25.463 "type": "rebuild", 00:16:25.463 "target": "spare", 00:16:25.463 "progress": { 00:16:25.463 "blocks": 21120, 00:16:25.463 "percent": 10 00:16:25.463 } 00:16:25.463 }, 00:16:25.463 "base_bdevs_list": [ 00:16:25.463 { 00:16:25.463 "name": "spare", 00:16:25.463 "uuid": "d794893c-cbe6-55d1-9be7-f90c074237d9", 00:16:25.463 "is_configured": true, 00:16:25.463 "data_offset": 0, 00:16:25.463 "data_size": 65536 00:16:25.463 }, 00:16:25.463 { 00:16:25.463 "name": "BaseBdev2", 00:16:25.463 "uuid": "287009e0-710c-5f4d-99fb-dc50e100fae5", 00:16:25.463 "is_configured": true, 00:16:25.463 "data_offset": 0, 00:16:25.463 "data_size": 65536 00:16:25.463 }, 00:16:25.463 { 00:16:25.463 "name": "BaseBdev3", 00:16:25.463 "uuid": "45d71ded-7ef1-568e-868f-1956a89a829c", 00:16:25.463 "is_configured": true, 00:16:25.463 "data_offset": 0, 00:16:25.463 "data_size": 65536 00:16:25.463 }, 00:16:25.463 { 00:16:25.463 "name": "BaseBdev4", 00:16:25.463 "uuid": "eee64eb7-128f-547b-94c2-e7d374275a97", 00:16:25.463 "is_configured": true, 00:16:25.463 "data_offset": 0, 00:16:25.463 "data_size": 65536 00:16:25.463 } 00:16:25.463 ] 00:16:25.463 }' 00:16:25.463 15:24:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.463 15:24:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:25.463 15:24:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.463 15:24:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:25.463 15:24:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:26.840 15:24:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:26.840 15:24:12 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:26.840 15:24:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.840 15:24:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:26.840 15:24:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:26.840 15:24:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.840 15:24:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.840 15:24:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.840 15:24:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.840 15:24:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.840 15:24:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.840 15:24:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.840 "name": "raid_bdev1", 00:16:26.840 "uuid": "1063129c-dd3b-4f11-abc1-d8068da9968a", 00:16:26.840 "strip_size_kb": 64, 00:16:26.840 "state": "online", 00:16:26.840 "raid_level": "raid5f", 00:16:26.840 "superblock": false, 00:16:26.840 "num_base_bdevs": 4, 00:16:26.840 "num_base_bdevs_discovered": 4, 00:16:26.840 "num_base_bdevs_operational": 4, 00:16:26.840 "process": { 00:16:26.840 "type": "rebuild", 00:16:26.840 "target": "spare", 00:16:26.841 "progress": { 00:16:26.841 "blocks": 42240, 00:16:26.841 "percent": 21 00:16:26.841 } 00:16:26.841 }, 00:16:26.841 "base_bdevs_list": [ 00:16:26.841 { 00:16:26.841 "name": "spare", 00:16:26.841 "uuid": "d794893c-cbe6-55d1-9be7-f90c074237d9", 00:16:26.841 "is_configured": true, 00:16:26.841 "data_offset": 0, 00:16:26.841 "data_size": 65536 00:16:26.841 }, 00:16:26.841 { 00:16:26.841 
"name": "BaseBdev2", 00:16:26.841 "uuid": "287009e0-710c-5f4d-99fb-dc50e100fae5", 00:16:26.841 "is_configured": true, 00:16:26.841 "data_offset": 0, 00:16:26.841 "data_size": 65536 00:16:26.841 }, 00:16:26.841 { 00:16:26.841 "name": "BaseBdev3", 00:16:26.841 "uuid": "45d71ded-7ef1-568e-868f-1956a89a829c", 00:16:26.841 "is_configured": true, 00:16:26.841 "data_offset": 0, 00:16:26.841 "data_size": 65536 00:16:26.841 }, 00:16:26.841 { 00:16:26.841 "name": "BaseBdev4", 00:16:26.841 "uuid": "eee64eb7-128f-547b-94c2-e7d374275a97", 00:16:26.841 "is_configured": true, 00:16:26.841 "data_offset": 0, 00:16:26.841 "data_size": 65536 00:16:26.841 } 00:16:26.841 ] 00:16:26.841 }' 00:16:26.841 15:24:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.841 15:24:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:26.841 15:24:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.841 15:24:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:26.841 15:24:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:27.777 15:24:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:27.777 15:24:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:27.777 15:24:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.777 15:24:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:27.777 15:24:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:27.777 15:24:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.777 15:24:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:27.777 15:24:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.777 15:24:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.777 15:24:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.777 15:24:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.777 15:24:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.777 "name": "raid_bdev1", 00:16:27.777 "uuid": "1063129c-dd3b-4f11-abc1-d8068da9968a", 00:16:27.777 "strip_size_kb": 64, 00:16:27.777 "state": "online", 00:16:27.777 "raid_level": "raid5f", 00:16:27.777 "superblock": false, 00:16:27.777 "num_base_bdevs": 4, 00:16:27.777 "num_base_bdevs_discovered": 4, 00:16:27.777 "num_base_bdevs_operational": 4, 00:16:27.777 "process": { 00:16:27.777 "type": "rebuild", 00:16:27.777 "target": "spare", 00:16:27.777 "progress": { 00:16:27.777 "blocks": 63360, 00:16:27.777 "percent": 32 00:16:27.777 } 00:16:27.777 }, 00:16:27.777 "base_bdevs_list": [ 00:16:27.777 { 00:16:27.777 "name": "spare", 00:16:27.777 "uuid": "d794893c-cbe6-55d1-9be7-f90c074237d9", 00:16:27.777 "is_configured": true, 00:16:27.777 "data_offset": 0, 00:16:27.777 "data_size": 65536 00:16:27.777 }, 00:16:27.777 { 00:16:27.777 "name": "BaseBdev2", 00:16:27.777 "uuid": "287009e0-710c-5f4d-99fb-dc50e100fae5", 00:16:27.777 "is_configured": true, 00:16:27.777 "data_offset": 0, 00:16:27.777 "data_size": 65536 00:16:27.777 }, 00:16:27.777 { 00:16:27.777 "name": "BaseBdev3", 00:16:27.777 "uuid": "45d71ded-7ef1-568e-868f-1956a89a829c", 00:16:27.777 "is_configured": true, 00:16:27.777 "data_offset": 0, 00:16:27.777 "data_size": 65536 00:16:27.777 }, 00:16:27.777 { 00:16:27.777 "name": "BaseBdev4", 00:16:27.777 "uuid": "eee64eb7-128f-547b-94c2-e7d374275a97", 00:16:27.777 "is_configured": true, 00:16:27.777 "data_offset": 0, 00:16:27.777 
"data_size": 65536 00:16:27.777 } 00:16:27.777 ] 00:16:27.777 }' 00:16:27.777 15:24:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.777 15:24:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:27.777 15:24:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.777 15:24:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:27.777 15:24:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:28.713 15:24:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:28.713 15:24:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.713 15:24:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.713 15:24:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.713 15:24:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.713 15:24:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.713 15:24:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.713 15:24:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.713 15:24:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.713 15:24:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.973 15:24:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.973 15:24:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.973 "name": "raid_bdev1", 00:16:28.973 "uuid": 
"1063129c-dd3b-4f11-abc1-d8068da9968a", 00:16:28.973 "strip_size_kb": 64, 00:16:28.973 "state": "online", 00:16:28.973 "raid_level": "raid5f", 00:16:28.973 "superblock": false, 00:16:28.973 "num_base_bdevs": 4, 00:16:28.973 "num_base_bdevs_discovered": 4, 00:16:28.973 "num_base_bdevs_operational": 4, 00:16:28.973 "process": { 00:16:28.973 "type": "rebuild", 00:16:28.973 "target": "spare", 00:16:28.973 "progress": { 00:16:28.973 "blocks": 86400, 00:16:28.973 "percent": 43 00:16:28.973 } 00:16:28.973 }, 00:16:28.973 "base_bdevs_list": [ 00:16:28.973 { 00:16:28.973 "name": "spare", 00:16:28.973 "uuid": "d794893c-cbe6-55d1-9be7-f90c074237d9", 00:16:28.973 "is_configured": true, 00:16:28.973 "data_offset": 0, 00:16:28.973 "data_size": 65536 00:16:28.973 }, 00:16:28.973 { 00:16:28.973 "name": "BaseBdev2", 00:16:28.973 "uuid": "287009e0-710c-5f4d-99fb-dc50e100fae5", 00:16:28.973 "is_configured": true, 00:16:28.973 "data_offset": 0, 00:16:28.973 "data_size": 65536 00:16:28.974 }, 00:16:28.974 { 00:16:28.974 "name": "BaseBdev3", 00:16:28.974 "uuid": "45d71ded-7ef1-568e-868f-1956a89a829c", 00:16:28.974 "is_configured": true, 00:16:28.974 "data_offset": 0, 00:16:28.974 "data_size": 65536 00:16:28.974 }, 00:16:28.974 { 00:16:28.974 "name": "BaseBdev4", 00:16:28.974 "uuid": "eee64eb7-128f-547b-94c2-e7d374275a97", 00:16:28.974 "is_configured": true, 00:16:28.974 "data_offset": 0, 00:16:28.974 "data_size": 65536 00:16:28.974 } 00:16:28.974 ] 00:16:28.974 }' 00:16:28.974 15:24:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.974 15:24:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:28.974 15:24:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.974 15:24:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:28.974 15:24:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- 
# sleep 1 00:16:29.912 15:24:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:29.912 15:24:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:29.912 15:24:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.912 15:24:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.912 15:24:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.912 15:24:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.912 15:24:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.912 15:24:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.912 15:24:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.912 15:24:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.912 15:24:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.912 15:24:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.912 "name": "raid_bdev1", 00:16:29.912 "uuid": "1063129c-dd3b-4f11-abc1-d8068da9968a", 00:16:29.912 "strip_size_kb": 64, 00:16:29.912 "state": "online", 00:16:29.912 "raid_level": "raid5f", 00:16:29.912 "superblock": false, 00:16:29.912 "num_base_bdevs": 4, 00:16:29.912 "num_base_bdevs_discovered": 4, 00:16:29.912 "num_base_bdevs_operational": 4, 00:16:29.912 "process": { 00:16:29.912 "type": "rebuild", 00:16:29.912 "target": "spare", 00:16:29.912 "progress": { 00:16:29.912 "blocks": 107520, 00:16:29.912 "percent": 54 00:16:29.912 } 00:16:29.912 }, 00:16:29.912 "base_bdevs_list": [ 00:16:29.912 { 00:16:29.912 "name": "spare", 00:16:29.912 "uuid": 
"d794893c-cbe6-55d1-9be7-f90c074237d9", 00:16:29.912 "is_configured": true, 00:16:29.912 "data_offset": 0, 00:16:29.912 "data_size": 65536 00:16:29.912 }, 00:16:29.912 { 00:16:29.912 "name": "BaseBdev2", 00:16:29.912 "uuid": "287009e0-710c-5f4d-99fb-dc50e100fae5", 00:16:29.912 "is_configured": true, 00:16:29.912 "data_offset": 0, 00:16:29.912 "data_size": 65536 00:16:29.912 }, 00:16:29.912 { 00:16:29.912 "name": "BaseBdev3", 00:16:29.912 "uuid": "45d71ded-7ef1-568e-868f-1956a89a829c", 00:16:29.912 "is_configured": true, 00:16:29.912 "data_offset": 0, 00:16:29.912 "data_size": 65536 00:16:29.912 }, 00:16:29.912 { 00:16:29.912 "name": "BaseBdev4", 00:16:29.912 "uuid": "eee64eb7-128f-547b-94c2-e7d374275a97", 00:16:29.912 "is_configured": true, 00:16:29.912 "data_offset": 0, 00:16:29.912 "data_size": 65536 00:16:29.912 } 00:16:29.912 ] 00:16:29.912 }' 00:16:29.912 15:24:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.171 15:24:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:30.171 15:24:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.171 15:24:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:30.171 15:24:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:31.108 15:24:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:31.108 15:24:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.108 15:24:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.108 15:24:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.108 15:24:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:31.108 15:24:17 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.108 15:24:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.108 15:24:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.108 15:24:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.108 15:24:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.108 15:24:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.108 15:24:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.108 "name": "raid_bdev1", 00:16:31.108 "uuid": "1063129c-dd3b-4f11-abc1-d8068da9968a", 00:16:31.108 "strip_size_kb": 64, 00:16:31.108 "state": "online", 00:16:31.108 "raid_level": "raid5f", 00:16:31.108 "superblock": false, 00:16:31.108 "num_base_bdevs": 4, 00:16:31.108 "num_base_bdevs_discovered": 4, 00:16:31.108 "num_base_bdevs_operational": 4, 00:16:31.108 "process": { 00:16:31.108 "type": "rebuild", 00:16:31.108 "target": "spare", 00:16:31.108 "progress": { 00:16:31.108 "blocks": 128640, 00:16:31.108 "percent": 65 00:16:31.108 } 00:16:31.108 }, 00:16:31.108 "base_bdevs_list": [ 00:16:31.108 { 00:16:31.108 "name": "spare", 00:16:31.108 "uuid": "d794893c-cbe6-55d1-9be7-f90c074237d9", 00:16:31.108 "is_configured": true, 00:16:31.108 "data_offset": 0, 00:16:31.108 "data_size": 65536 00:16:31.108 }, 00:16:31.108 { 00:16:31.108 "name": "BaseBdev2", 00:16:31.108 "uuid": "287009e0-710c-5f4d-99fb-dc50e100fae5", 00:16:31.108 "is_configured": true, 00:16:31.108 "data_offset": 0, 00:16:31.108 "data_size": 65536 00:16:31.108 }, 00:16:31.108 { 00:16:31.108 "name": "BaseBdev3", 00:16:31.108 "uuid": "45d71ded-7ef1-568e-868f-1956a89a829c", 00:16:31.108 "is_configured": true, 00:16:31.108 "data_offset": 0, 00:16:31.108 "data_size": 65536 00:16:31.108 }, 
00:16:31.108 { 00:16:31.108 "name": "BaseBdev4", 00:16:31.108 "uuid": "eee64eb7-128f-547b-94c2-e7d374275a97", 00:16:31.108 "is_configured": true, 00:16:31.108 "data_offset": 0, 00:16:31.108 "data_size": 65536 00:16:31.108 } 00:16:31.108 ] 00:16:31.108 }' 00:16:31.108 15:24:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.108 15:24:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:31.108 15:24:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.366 15:24:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:31.366 15:24:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:32.304 15:24:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:32.304 15:24:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:32.304 15:24:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.304 15:24:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:32.304 15:24:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:32.304 15:24:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.304 15:24:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.304 15:24:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.304 15:24:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.304 15:24:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.304 15:24:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:32.304 15:24:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.304 "name": "raid_bdev1", 00:16:32.304 "uuid": "1063129c-dd3b-4f11-abc1-d8068da9968a", 00:16:32.304 "strip_size_kb": 64, 00:16:32.304 "state": "online", 00:16:32.304 "raid_level": "raid5f", 00:16:32.304 "superblock": false, 00:16:32.304 "num_base_bdevs": 4, 00:16:32.304 "num_base_bdevs_discovered": 4, 00:16:32.304 "num_base_bdevs_operational": 4, 00:16:32.304 "process": { 00:16:32.304 "type": "rebuild", 00:16:32.304 "target": "spare", 00:16:32.304 "progress": { 00:16:32.304 "blocks": 151680, 00:16:32.304 "percent": 77 00:16:32.304 } 00:16:32.304 }, 00:16:32.304 "base_bdevs_list": [ 00:16:32.304 { 00:16:32.304 "name": "spare", 00:16:32.304 "uuid": "d794893c-cbe6-55d1-9be7-f90c074237d9", 00:16:32.304 "is_configured": true, 00:16:32.304 "data_offset": 0, 00:16:32.304 "data_size": 65536 00:16:32.304 }, 00:16:32.304 { 00:16:32.304 "name": "BaseBdev2", 00:16:32.304 "uuid": "287009e0-710c-5f4d-99fb-dc50e100fae5", 00:16:32.304 "is_configured": true, 00:16:32.304 "data_offset": 0, 00:16:32.304 "data_size": 65536 00:16:32.304 }, 00:16:32.304 { 00:16:32.304 "name": "BaseBdev3", 00:16:32.304 "uuid": "45d71ded-7ef1-568e-868f-1956a89a829c", 00:16:32.304 "is_configured": true, 00:16:32.304 "data_offset": 0, 00:16:32.304 "data_size": 65536 00:16:32.304 }, 00:16:32.304 { 00:16:32.304 "name": "BaseBdev4", 00:16:32.304 "uuid": "eee64eb7-128f-547b-94c2-e7d374275a97", 00:16:32.304 "is_configured": true, 00:16:32.304 "data_offset": 0, 00:16:32.304 "data_size": 65536 00:16:32.304 } 00:16:32.304 ] 00:16:32.305 }' 00:16:32.305 15:24:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.305 15:24:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:32.305 15:24:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.305 15:24:18 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.305 15:24:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:33.683 15:24:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:33.683 15:24:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.683 15:24:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.683 15:24:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.683 15:24:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.683 15:24:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.683 15:24:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.683 15:24:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.683 15:24:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.683 15:24:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.683 15:24:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.683 15:24:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.683 "name": "raid_bdev1", 00:16:33.683 "uuid": "1063129c-dd3b-4f11-abc1-d8068da9968a", 00:16:33.683 "strip_size_kb": 64, 00:16:33.683 "state": "online", 00:16:33.683 "raid_level": "raid5f", 00:16:33.683 "superblock": false, 00:16:33.683 "num_base_bdevs": 4, 00:16:33.683 "num_base_bdevs_discovered": 4, 00:16:33.683 "num_base_bdevs_operational": 4, 00:16:33.683 "process": { 00:16:33.683 "type": "rebuild", 00:16:33.683 "target": "spare", 00:16:33.683 "progress": { 00:16:33.683 "blocks": 172800, 
00:16:33.683 "percent": 87 00:16:33.683 } 00:16:33.683 }, 00:16:33.684 "base_bdevs_list": [ 00:16:33.684 { 00:16:33.684 "name": "spare", 00:16:33.684 "uuid": "d794893c-cbe6-55d1-9be7-f90c074237d9", 00:16:33.684 "is_configured": true, 00:16:33.684 "data_offset": 0, 00:16:33.684 "data_size": 65536 00:16:33.684 }, 00:16:33.684 { 00:16:33.684 "name": "BaseBdev2", 00:16:33.684 "uuid": "287009e0-710c-5f4d-99fb-dc50e100fae5", 00:16:33.684 "is_configured": true, 00:16:33.684 "data_offset": 0, 00:16:33.684 "data_size": 65536 00:16:33.684 }, 00:16:33.684 { 00:16:33.684 "name": "BaseBdev3", 00:16:33.684 "uuid": "45d71ded-7ef1-568e-868f-1956a89a829c", 00:16:33.684 "is_configured": true, 00:16:33.684 "data_offset": 0, 00:16:33.684 "data_size": 65536 00:16:33.684 }, 00:16:33.684 { 00:16:33.684 "name": "BaseBdev4", 00:16:33.684 "uuid": "eee64eb7-128f-547b-94c2-e7d374275a97", 00:16:33.684 "is_configured": true, 00:16:33.684 "data_offset": 0, 00:16:33.684 "data_size": 65536 00:16:33.684 } 00:16:33.684 ] 00:16:33.684 }' 00:16:33.684 15:24:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.684 15:24:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:33.684 15:24:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.684 15:24:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.684 15:24:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:34.619 15:24:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:34.619 15:24:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:34.619 15:24:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.619 15:24:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:16:34.619 15:24:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:34.619 15:24:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.619 15:24:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.619 15:24:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.619 15:24:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.619 15:24:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.619 15:24:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.619 15:24:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.619 "name": "raid_bdev1", 00:16:34.619 "uuid": "1063129c-dd3b-4f11-abc1-d8068da9968a", 00:16:34.619 "strip_size_kb": 64, 00:16:34.619 "state": "online", 00:16:34.619 "raid_level": "raid5f", 00:16:34.619 "superblock": false, 00:16:34.619 "num_base_bdevs": 4, 00:16:34.619 "num_base_bdevs_discovered": 4, 00:16:34.619 "num_base_bdevs_operational": 4, 00:16:34.619 "process": { 00:16:34.619 "type": "rebuild", 00:16:34.619 "target": "spare", 00:16:34.619 "progress": { 00:16:34.619 "blocks": 193920, 00:16:34.619 "percent": 98 00:16:34.619 } 00:16:34.619 }, 00:16:34.619 "base_bdevs_list": [ 00:16:34.619 { 00:16:34.619 "name": "spare", 00:16:34.619 "uuid": "d794893c-cbe6-55d1-9be7-f90c074237d9", 00:16:34.619 "is_configured": true, 00:16:34.619 "data_offset": 0, 00:16:34.619 "data_size": 65536 00:16:34.619 }, 00:16:34.619 { 00:16:34.619 "name": "BaseBdev2", 00:16:34.619 "uuid": "287009e0-710c-5f4d-99fb-dc50e100fae5", 00:16:34.619 "is_configured": true, 00:16:34.619 "data_offset": 0, 00:16:34.619 "data_size": 65536 00:16:34.619 }, 00:16:34.619 { 00:16:34.619 "name": "BaseBdev3", 00:16:34.619 "uuid": 
"45d71ded-7ef1-568e-868f-1956a89a829c", 00:16:34.619 "is_configured": true, 00:16:34.619 "data_offset": 0, 00:16:34.619 "data_size": 65536 00:16:34.619 }, 00:16:34.619 { 00:16:34.619 "name": "BaseBdev4", 00:16:34.619 "uuid": "eee64eb7-128f-547b-94c2-e7d374275a97", 00:16:34.619 "is_configured": true, 00:16:34.619 "data_offset": 0, 00:16:34.619 "data_size": 65536 00:16:34.619 } 00:16:34.619 ] 00:16:34.619 }' 00:16:34.619 15:24:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.619 15:24:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:34.619 15:24:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.619 15:24:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:34.619 15:24:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:34.619 [2024-11-20 15:24:21.002827] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:34.619 [2024-11-20 15:24:21.003096] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:34.619 [2024-11-20 15:24:21.003284] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.556 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:35.556 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.556 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.556 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.556 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.556 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:16:35.556 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.556 15:24:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.556 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.556 15:24:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.815 15:24:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.815 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.815 "name": "raid_bdev1", 00:16:35.815 "uuid": "1063129c-dd3b-4f11-abc1-d8068da9968a", 00:16:35.815 "strip_size_kb": 64, 00:16:35.815 "state": "online", 00:16:35.815 "raid_level": "raid5f", 00:16:35.815 "superblock": false, 00:16:35.815 "num_base_bdevs": 4, 00:16:35.815 "num_base_bdevs_discovered": 4, 00:16:35.815 "num_base_bdevs_operational": 4, 00:16:35.815 "base_bdevs_list": [ 00:16:35.815 { 00:16:35.815 "name": "spare", 00:16:35.815 "uuid": "d794893c-cbe6-55d1-9be7-f90c074237d9", 00:16:35.815 "is_configured": true, 00:16:35.815 "data_offset": 0, 00:16:35.815 "data_size": 65536 00:16:35.815 }, 00:16:35.815 { 00:16:35.815 "name": "BaseBdev2", 00:16:35.815 "uuid": "287009e0-710c-5f4d-99fb-dc50e100fae5", 00:16:35.815 "is_configured": true, 00:16:35.815 "data_offset": 0, 00:16:35.815 "data_size": 65536 00:16:35.815 }, 00:16:35.815 { 00:16:35.815 "name": "BaseBdev3", 00:16:35.815 "uuid": "45d71ded-7ef1-568e-868f-1956a89a829c", 00:16:35.815 "is_configured": true, 00:16:35.815 "data_offset": 0, 00:16:35.815 "data_size": 65536 00:16:35.815 }, 00:16:35.815 { 00:16:35.815 "name": "BaseBdev4", 00:16:35.815 "uuid": "eee64eb7-128f-547b-94c2-e7d374275a97", 00:16:35.815 "is_configured": true, 00:16:35.815 "data_offset": 0, 00:16:35.815 "data_size": 65536 00:16:35.815 } 00:16:35.815 ] 00:16:35.815 }' 00:16:35.815 15:24:22 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.815 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:35.815 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.815 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:35.815 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:35.815 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:35.815 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.815 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:35.815 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:35.815 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.815 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.815 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.815 15:24:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.815 15:24:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.815 15:24:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.815 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.815 "name": "raid_bdev1", 00:16:35.815 "uuid": "1063129c-dd3b-4f11-abc1-d8068da9968a", 00:16:35.815 "strip_size_kb": 64, 00:16:35.815 "state": "online", 00:16:35.815 "raid_level": "raid5f", 00:16:35.815 "superblock": false, 00:16:35.815 "num_base_bdevs": 4, 00:16:35.815 
"num_base_bdevs_discovered": 4, 00:16:35.815 "num_base_bdevs_operational": 4, 00:16:35.815 "base_bdevs_list": [ 00:16:35.815 { 00:16:35.815 "name": "spare", 00:16:35.815 "uuid": "d794893c-cbe6-55d1-9be7-f90c074237d9", 00:16:35.815 "is_configured": true, 00:16:35.816 "data_offset": 0, 00:16:35.816 "data_size": 65536 00:16:35.816 }, 00:16:35.816 { 00:16:35.816 "name": "BaseBdev2", 00:16:35.816 "uuid": "287009e0-710c-5f4d-99fb-dc50e100fae5", 00:16:35.816 "is_configured": true, 00:16:35.816 "data_offset": 0, 00:16:35.816 "data_size": 65536 00:16:35.816 }, 00:16:35.816 { 00:16:35.816 "name": "BaseBdev3", 00:16:35.816 "uuid": "45d71ded-7ef1-568e-868f-1956a89a829c", 00:16:35.816 "is_configured": true, 00:16:35.816 "data_offset": 0, 00:16:35.816 "data_size": 65536 00:16:35.816 }, 00:16:35.816 { 00:16:35.816 "name": "BaseBdev4", 00:16:35.816 "uuid": "eee64eb7-128f-547b-94c2-e7d374275a97", 00:16:35.816 "is_configured": true, 00:16:35.816 "data_offset": 0, 00:16:35.816 "data_size": 65536 00:16:35.816 } 00:16:35.816 ] 00:16:35.816 }' 00:16:35.816 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.816 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:35.816 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.816 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:35.816 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:35.816 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.816 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.816 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.816 15:24:22 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.816 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:35.816 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.816 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.816 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.816 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.816 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.816 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.816 15:24:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.816 15:24:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.075 15:24:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.075 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.075 "name": "raid_bdev1", 00:16:36.075 "uuid": "1063129c-dd3b-4f11-abc1-d8068da9968a", 00:16:36.075 "strip_size_kb": 64, 00:16:36.075 "state": "online", 00:16:36.075 "raid_level": "raid5f", 00:16:36.075 "superblock": false, 00:16:36.075 "num_base_bdevs": 4, 00:16:36.075 "num_base_bdevs_discovered": 4, 00:16:36.075 "num_base_bdevs_operational": 4, 00:16:36.075 "base_bdevs_list": [ 00:16:36.075 { 00:16:36.075 "name": "spare", 00:16:36.075 "uuid": "d794893c-cbe6-55d1-9be7-f90c074237d9", 00:16:36.075 "is_configured": true, 00:16:36.075 "data_offset": 0, 00:16:36.075 "data_size": 65536 00:16:36.075 }, 00:16:36.075 { 00:16:36.075 "name": "BaseBdev2", 00:16:36.075 "uuid": "287009e0-710c-5f4d-99fb-dc50e100fae5", 00:16:36.075 "is_configured": true, 00:16:36.075 
"data_offset": 0, 00:16:36.075 "data_size": 65536 00:16:36.075 }, 00:16:36.075 { 00:16:36.075 "name": "BaseBdev3", 00:16:36.075 "uuid": "45d71ded-7ef1-568e-868f-1956a89a829c", 00:16:36.075 "is_configured": true, 00:16:36.075 "data_offset": 0, 00:16:36.075 "data_size": 65536 00:16:36.075 }, 00:16:36.075 { 00:16:36.075 "name": "BaseBdev4", 00:16:36.075 "uuid": "eee64eb7-128f-547b-94c2-e7d374275a97", 00:16:36.075 "is_configured": true, 00:16:36.075 "data_offset": 0, 00:16:36.075 "data_size": 65536 00:16:36.075 } 00:16:36.075 ] 00:16:36.075 }' 00:16:36.075 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.075 15:24:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.334 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:36.334 15:24:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.334 15:24:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.334 [2024-11-20 15:24:22.698875] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:36.334 [2024-11-20 15:24:22.699242] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:36.334 [2024-11-20 15:24:22.699364] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:36.334 [2024-11-20 15:24:22.699472] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:36.334 [2024-11-20 15:24:22.699486] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:36.334 15:24:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.334 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.334 15:24:22 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.334 15:24:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.334 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:36.334 15:24:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.334 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:36.334 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:36.334 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:36.334 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:36.334 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:36.334 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:36.334 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:36.334 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:36.334 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:36.334 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:36.334 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:36.335 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:36.335 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:36.594 /dev/nbd0 00:16:36.594 15:24:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:36.594 15:24:22 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:36.594 15:24:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:36.594 15:24:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:36.594 15:24:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:36.594 15:24:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:36.594 15:24:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:36.594 15:24:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:36.594 15:24:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:36.594 15:24:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:36.594 15:24:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:36.594 1+0 records in 00:16:36.594 1+0 records out 00:16:36.594 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329539 s, 12.4 MB/s 00:16:36.594 15:24:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:36.594 15:24:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:36.594 15:24:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:36.594 15:24:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:36.594 15:24:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:36.594 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:36.594 15:24:23 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:36.594 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:36.853 /dev/nbd1 00:16:36.853 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:36.853 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:36.853 15:24:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:36.853 15:24:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:36.853 15:24:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:36.853 15:24:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:36.853 15:24:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:36.853 15:24:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:36.853 15:24:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:36.853 15:24:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:36.853 15:24:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:36.853 1+0 records in 00:16:36.853 1+0 records out 00:16:36.853 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376453 s, 10.9 MB/s 00:16:36.853 15:24:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:36.853 15:24:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:36.853 15:24:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:36.853 
15:24:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:36.853 15:24:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:36.853 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:36.853 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:36.853 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:37.112 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:37.112 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:37.112 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:37.112 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:37.112 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:37.112 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:37.112 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:37.371 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:37.371 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:37.371 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:37.371 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:37.371 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:37.371 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:37.371 15:24:23 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:16:37.371 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:37.371 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:37.371 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:37.631 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:37.631 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:37.631 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:37.631 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:37.631 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:37.631 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:37.631 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:37.631 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:37.631 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:37.631 15:24:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84402 00:16:37.631 15:24:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84402 ']' 00:16:37.631 15:24:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84402 00:16:37.631 15:24:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:37.631 15:24:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:37.631 15:24:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84402 00:16:37.631 killing process with pid 84402 00:16:37.631 
Received shutdown signal, test time was about 60.000000 seconds 00:16:37.631 00:16:37.631 Latency(us) 00:16:37.631 [2024-11-20T15:24:24.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.631 [2024-11-20T15:24:24.113Z] =================================================================================================================== 00:16:37.631 [2024-11-20T15:24:24.113Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:37.631 15:24:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:37.631 15:24:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:37.631 15:24:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84402' 00:16:37.631 15:24:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84402 00:16:37.631 [2024-11-20 15:24:23.988948] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:37.631 15:24:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84402 00:16:38.199 [2024-11-20 15:24:24.483650] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:39.136 15:24:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:39.136 00:16:39.136 real 0m20.112s 00:16:39.136 user 0m23.843s 00:16:39.136 sys 0m2.587s 00:16:39.136 15:24:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:39.136 ************************************ 00:16:39.136 END TEST raid5f_rebuild_test 00:16:39.136 ************************************ 00:16:39.136 15:24:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.395 15:24:25 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:16:39.395 15:24:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 
00:16:39.395 15:24:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:39.395 15:24:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:39.395 ************************************ 00:16:39.395 START TEST raid5f_rebuild_test_sb 00:16:39.395 ************************************ 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=84924 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 84924 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84924 ']' 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:39.395 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.395 [2024-11-20 15:24:25.796294] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:16:39.395 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:39.395 Zero copy mechanism will not be used. 
00:16:39.396 [2024-11-20 15:24:25.797821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84924 ] 00:16:39.655 [2024-11-20 15:24:25.994385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.655 [2024-11-20 15:24:26.115713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.914 [2024-11-20 15:24:26.327868] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:39.914 [2024-11-20 15:24:26.327941] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:40.173 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:40.173 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:40.173 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:40.173 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:40.173 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.173 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.433 BaseBdev1_malloc 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.433 [2024-11-20 15:24:26.695875] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:40.433 [2024-11-20 15:24:26.695960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.433 [2024-11-20 15:24:26.695986] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:40.433 [2024-11-20 15:24:26.696001] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.433 [2024-11-20 15:24:26.698484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.433 [2024-11-20 15:24:26.698722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:40.433 BaseBdev1 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.433 BaseBdev2_malloc 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.433 [2024-11-20 15:24:26.755608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:40.433 [2024-11-20 15:24:26.755724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:16:40.433 [2024-11-20 15:24:26.755752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:40.433 [2024-11-20 15:24:26.755767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.433 [2024-11-20 15:24:26.758221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.433 [2024-11-20 15:24:26.758409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:40.433 BaseBdev2 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.433 BaseBdev3_malloc 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.433 [2024-11-20 15:24:26.825693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:40.433 [2024-11-20 15:24:26.825764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.433 [2024-11-20 15:24:26.825788] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:40.433 [2024-11-20 
15:24:26.825802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.433 [2024-11-20 15:24:26.828205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.433 [2024-11-20 15:24:26.828251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:40.433 BaseBdev3 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.433 BaseBdev4_malloc 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.433 [2024-11-20 15:24:26.879827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:40.433 [2024-11-20 15:24:26.879908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.433 [2024-11-20 15:24:26.879933] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:40.433 [2024-11-20 15:24:26.879947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.433 [2024-11-20 15:24:26.882332] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:16:40.433 [2024-11-20 15:24:26.882490] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:40.433 BaseBdev4 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.433 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.694 spare_malloc 00:16:40.694 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.694 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:40.694 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.694 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.694 spare_delay 00:16:40.694 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.694 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:40.694 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.694 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.694 [2024-11-20 15:24:26.951800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:40.694 [2024-11-20 15:24:26.951867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.694 [2024-11-20 15:24:26.951889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:16:40.694 [2024-11-20 15:24:26.951903] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.694 [2024-11-20 15:24:26.954297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.694 [2024-11-20 15:24:26.954341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:40.694 spare 00:16:40.694 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.694 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:40.694 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.694 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.694 [2024-11-20 15:24:26.963851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:40.694 [2024-11-20 15:24:26.965945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:40.694 [2024-11-20 15:24:26.966008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:40.694 [2024-11-20 15:24:26.966060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:40.694 [2024-11-20 15:24:26.966259] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:40.694 [2024-11-20 15:24:26.966276] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:40.694 [2024-11-20 15:24:26.966568] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:40.694 [2024-11-20 15:24:26.974900] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:40.694 [2024-11-20 15:24:26.975038] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:16:40.694 [2024-11-20 15:24:26.975378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.694 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.694 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:40.694 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.694 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.694 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.694 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.694 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:40.694 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.694 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.694 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.694 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.694 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.694 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.694 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.694 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.694 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.694 15:24:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.694 "name": "raid_bdev1", 00:16:40.694 "uuid": "8e6f522e-374c-4b6e-afac-a6463c5d6bfb", 00:16:40.694 "strip_size_kb": 64, 00:16:40.694 "state": "online", 00:16:40.694 "raid_level": "raid5f", 00:16:40.694 "superblock": true, 00:16:40.694 "num_base_bdevs": 4, 00:16:40.694 "num_base_bdevs_discovered": 4, 00:16:40.694 "num_base_bdevs_operational": 4, 00:16:40.694 "base_bdevs_list": [ 00:16:40.694 { 00:16:40.694 "name": "BaseBdev1", 00:16:40.694 "uuid": "692ce804-3b87-5195-b365-a961a0d971fc", 00:16:40.694 "is_configured": true, 00:16:40.694 "data_offset": 2048, 00:16:40.694 "data_size": 63488 00:16:40.694 }, 00:16:40.694 { 00:16:40.694 "name": "BaseBdev2", 00:16:40.694 "uuid": "b9ea7341-daf6-5a4d-aa61-bf10a6d7449d", 00:16:40.694 "is_configured": true, 00:16:40.694 "data_offset": 2048, 00:16:40.694 "data_size": 63488 00:16:40.694 }, 00:16:40.694 { 00:16:40.694 "name": "BaseBdev3", 00:16:40.694 "uuid": "e8c54f98-d470-506c-a8ac-84bbf50a5152", 00:16:40.694 "is_configured": true, 00:16:40.694 "data_offset": 2048, 00:16:40.694 "data_size": 63488 00:16:40.694 }, 00:16:40.694 { 00:16:40.694 "name": "BaseBdev4", 00:16:40.694 "uuid": "69c62172-6a25-5b33-bd34-bacaaf3d1e51", 00:16:40.694 "is_configured": true, 00:16:40.694 "data_offset": 2048, 00:16:40.694 "data_size": 63488 00:16:40.694 } 00:16:40.694 ] 00:16:40.694 }' 00:16:40.694 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.694 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.953 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:40.953 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.953 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.953 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:40.953 [2024-11-20 15:24:27.379769] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:40.953 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.953 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:16:40.953 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.953 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.953 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:40.953 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.212 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.212 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:41.212 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:41.212 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:41.212 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:41.212 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:41.212 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:41.212 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:41.212 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:41.212 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:41.212 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:16:41.212 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:41.212 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:41.212 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:41.212 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:41.212 [2024-11-20 15:24:27.659204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:41.212 /dev/nbd0 00:16:41.471 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:41.472 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:41.472 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:41.472 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:41.472 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:41.472 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:41.472 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:41.472 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:41.472 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:41.472 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:41.472 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:41.472 1+0 records in 00:16:41.472 1+0 records out 00:16:41.472 4096 
bytes (4.1 kB, 4.0 KiB) copied, 0.00028073 s, 14.6 MB/s 00:16:41.472 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:41.472 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:41.472 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:41.472 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:41.472 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:41.472 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:41.472 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:41.472 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:41.472 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:41.472 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:41.472 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:16:42.039 496+0 records in 00:16:42.039 496+0 records out 00:16:42.039 97517568 bytes (98 MB, 93 MiB) copied, 0.481639 s, 202 MB/s 00:16:42.039 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:42.039 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:42.039 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:42.039 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:42.039 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # 
local i 00:16:42.039 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:42.040 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:42.040 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:42.040 [2024-11-20 15:24:28.463326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.040 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:42.040 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:42.040 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:42.040 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:42.040 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:42.040 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:42.040 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:42.040 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:42.040 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.040 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.040 [2024-11-20 15:24:28.480945] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:42.040 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.040 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:42.040 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:42.040 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.040 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.040 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.040 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:42.040 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.040 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.040 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.040 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.040 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.040 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.040 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.040 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.040 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.299 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.299 "name": "raid_bdev1", 00:16:42.299 "uuid": "8e6f522e-374c-4b6e-afac-a6463c5d6bfb", 00:16:42.299 "strip_size_kb": 64, 00:16:42.299 "state": "online", 00:16:42.299 "raid_level": "raid5f", 00:16:42.299 "superblock": true, 00:16:42.299 "num_base_bdevs": 4, 00:16:42.299 "num_base_bdevs_discovered": 3, 00:16:42.299 "num_base_bdevs_operational": 3, 00:16:42.299 "base_bdevs_list": [ 00:16:42.299 { 00:16:42.299 "name": null, 
00:16:42.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.299 "is_configured": false, 00:16:42.299 "data_offset": 0, 00:16:42.299 "data_size": 63488 00:16:42.299 }, 00:16:42.299 { 00:16:42.299 "name": "BaseBdev2", 00:16:42.299 "uuid": "b9ea7341-daf6-5a4d-aa61-bf10a6d7449d", 00:16:42.299 "is_configured": true, 00:16:42.299 "data_offset": 2048, 00:16:42.299 "data_size": 63488 00:16:42.299 }, 00:16:42.299 { 00:16:42.299 "name": "BaseBdev3", 00:16:42.299 "uuid": "e8c54f98-d470-506c-a8ac-84bbf50a5152", 00:16:42.299 "is_configured": true, 00:16:42.299 "data_offset": 2048, 00:16:42.299 "data_size": 63488 00:16:42.299 }, 00:16:42.299 { 00:16:42.299 "name": "BaseBdev4", 00:16:42.299 "uuid": "69c62172-6a25-5b33-bd34-bacaaf3d1e51", 00:16:42.299 "is_configured": true, 00:16:42.299 "data_offset": 2048, 00:16:42.299 "data_size": 63488 00:16:42.299 } 00:16:42.299 ] 00:16:42.299 }' 00:16:42.299 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.299 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.558 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:42.558 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.558 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.558 [2024-11-20 15:24:28.884368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:42.558 [2024-11-20 15:24:28.900277] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:16:42.558 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.558 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:42.558 [2024-11-20 15:24:28.910575] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid 
bdev raid_bdev1 00:16:43.496 15:24:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:43.496 15:24:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.496 15:24:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:43.496 15:24:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:43.496 15:24:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.496 15:24:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.496 15:24:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.496 15:24:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.496 15:24:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.496 15:24:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.496 15:24:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.496 "name": "raid_bdev1", 00:16:43.496 "uuid": "8e6f522e-374c-4b6e-afac-a6463c5d6bfb", 00:16:43.496 "strip_size_kb": 64, 00:16:43.496 "state": "online", 00:16:43.496 "raid_level": "raid5f", 00:16:43.496 "superblock": true, 00:16:43.496 "num_base_bdevs": 4, 00:16:43.496 "num_base_bdevs_discovered": 4, 00:16:43.496 "num_base_bdevs_operational": 4, 00:16:43.496 "process": { 00:16:43.496 "type": "rebuild", 00:16:43.496 "target": "spare", 00:16:43.496 "progress": { 00:16:43.496 "blocks": 17280, 00:16:43.496 "percent": 9 00:16:43.496 } 00:16:43.496 }, 00:16:43.496 "base_bdevs_list": [ 00:16:43.496 { 00:16:43.496 "name": "spare", 00:16:43.496 "uuid": "d95caf0f-3e00-59e2-97e9-97993e864fc4", 00:16:43.496 "is_configured": true, 
00:16:43.496 "data_offset": 2048, 00:16:43.496 "data_size": 63488 00:16:43.496 }, 00:16:43.496 { 00:16:43.496 "name": "BaseBdev2", 00:16:43.496 "uuid": "b9ea7341-daf6-5a4d-aa61-bf10a6d7449d", 00:16:43.496 "is_configured": true, 00:16:43.496 "data_offset": 2048, 00:16:43.496 "data_size": 63488 00:16:43.496 }, 00:16:43.496 { 00:16:43.496 "name": "BaseBdev3", 00:16:43.496 "uuid": "e8c54f98-d470-506c-a8ac-84bbf50a5152", 00:16:43.496 "is_configured": true, 00:16:43.496 "data_offset": 2048, 00:16:43.496 "data_size": 63488 00:16:43.496 }, 00:16:43.496 { 00:16:43.496 "name": "BaseBdev4", 00:16:43.496 "uuid": "69c62172-6a25-5b33-bd34-bacaaf3d1e51", 00:16:43.496 "is_configured": true, 00:16:43.496 "data_offset": 2048, 00:16:43.496 "data_size": 63488 00:16:43.496 } 00:16:43.496 ] 00:16:43.496 }' 00:16:43.496 15:24:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.755 15:24:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:43.755 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.755 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:43.755 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:43.755 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.755 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.755 [2024-11-20 15:24:30.049812] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:43.755 [2024-11-20 15:24:30.119716] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:43.755 [2024-11-20 15:24:30.119817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.755 [2024-11-20 
15:24:30.119838] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:43.755 [2024-11-20 15:24:30.119850] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:43.755 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.755 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:43.755 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.755 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.755 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.755 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.755 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:43.755 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.755 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.755 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.755 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.755 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.755 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.755 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.755 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.755 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.755 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.755 "name": "raid_bdev1", 00:16:43.755 "uuid": "8e6f522e-374c-4b6e-afac-a6463c5d6bfb", 00:16:43.755 "strip_size_kb": 64, 00:16:43.755 "state": "online", 00:16:43.755 "raid_level": "raid5f", 00:16:43.755 "superblock": true, 00:16:43.755 "num_base_bdevs": 4, 00:16:43.755 "num_base_bdevs_discovered": 3, 00:16:43.755 "num_base_bdevs_operational": 3, 00:16:43.755 "base_bdevs_list": [ 00:16:43.755 { 00:16:43.755 "name": null, 00:16:43.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.755 "is_configured": false, 00:16:43.755 "data_offset": 0, 00:16:43.755 "data_size": 63488 00:16:43.755 }, 00:16:43.755 { 00:16:43.755 "name": "BaseBdev2", 00:16:43.755 "uuid": "b9ea7341-daf6-5a4d-aa61-bf10a6d7449d", 00:16:43.755 "is_configured": true, 00:16:43.755 "data_offset": 2048, 00:16:43.755 "data_size": 63488 00:16:43.755 }, 00:16:43.755 { 00:16:43.755 "name": "BaseBdev3", 00:16:43.755 "uuid": "e8c54f98-d470-506c-a8ac-84bbf50a5152", 00:16:43.755 "is_configured": true, 00:16:43.755 "data_offset": 2048, 00:16:43.755 "data_size": 63488 00:16:43.755 }, 00:16:43.755 { 00:16:43.755 "name": "BaseBdev4", 00:16:43.755 "uuid": "69c62172-6a25-5b33-bd34-bacaaf3d1e51", 00:16:43.755 "is_configured": true, 00:16:43.755 "data_offset": 2048, 00:16:43.755 "data_size": 63488 00:16:43.755 } 00:16:43.755 ] 00:16:43.755 }' 00:16:43.755 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.755 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.325 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:44.325 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.325 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:44.325 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:44.325 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.325 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.325 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.325 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.325 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.325 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.325 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.325 "name": "raid_bdev1", 00:16:44.325 "uuid": "8e6f522e-374c-4b6e-afac-a6463c5d6bfb", 00:16:44.325 "strip_size_kb": 64, 00:16:44.325 "state": "online", 00:16:44.325 "raid_level": "raid5f", 00:16:44.325 "superblock": true, 00:16:44.325 "num_base_bdevs": 4, 00:16:44.325 "num_base_bdevs_discovered": 3, 00:16:44.325 "num_base_bdevs_operational": 3, 00:16:44.325 "base_bdevs_list": [ 00:16:44.325 { 00:16:44.325 "name": null, 00:16:44.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.325 "is_configured": false, 00:16:44.325 "data_offset": 0, 00:16:44.325 "data_size": 63488 00:16:44.325 }, 00:16:44.325 { 00:16:44.325 "name": "BaseBdev2", 00:16:44.325 "uuid": "b9ea7341-daf6-5a4d-aa61-bf10a6d7449d", 00:16:44.325 "is_configured": true, 00:16:44.325 "data_offset": 2048, 00:16:44.325 "data_size": 63488 00:16:44.325 }, 00:16:44.325 { 00:16:44.325 "name": "BaseBdev3", 00:16:44.325 "uuid": "e8c54f98-d470-506c-a8ac-84bbf50a5152", 00:16:44.325 "is_configured": true, 00:16:44.325 "data_offset": 2048, 00:16:44.325 "data_size": 63488 00:16:44.325 }, 
00:16:44.325 { 00:16:44.325 "name": "BaseBdev4", 00:16:44.325 "uuid": "69c62172-6a25-5b33-bd34-bacaaf3d1e51", 00:16:44.325 "is_configured": true, 00:16:44.325 "data_offset": 2048, 00:16:44.325 "data_size": 63488 00:16:44.325 } 00:16:44.325 ] 00:16:44.325 }' 00:16:44.325 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.325 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:44.325 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.325 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:44.325 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:44.325 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.325 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.325 [2024-11-20 15:24:30.692699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:44.325 [2024-11-20 15:24:30.710267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:16:44.325 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.325 15:24:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:44.325 [2024-11-20 15:24:30.720484] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:45.262 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:45.262 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.262 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:16:45.262 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:45.262 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.262 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.262 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.262 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.262 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.522 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.522 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.522 "name": "raid_bdev1", 00:16:45.522 "uuid": "8e6f522e-374c-4b6e-afac-a6463c5d6bfb", 00:16:45.522 "strip_size_kb": 64, 00:16:45.522 "state": "online", 00:16:45.522 "raid_level": "raid5f", 00:16:45.522 "superblock": true, 00:16:45.522 "num_base_bdevs": 4, 00:16:45.522 "num_base_bdevs_discovered": 4, 00:16:45.522 "num_base_bdevs_operational": 4, 00:16:45.522 "process": { 00:16:45.522 "type": "rebuild", 00:16:45.522 "target": "spare", 00:16:45.522 "progress": { 00:16:45.522 "blocks": 19200, 00:16:45.522 "percent": 10 00:16:45.522 } 00:16:45.522 }, 00:16:45.522 "base_bdevs_list": [ 00:16:45.522 { 00:16:45.522 "name": "spare", 00:16:45.522 "uuid": "d95caf0f-3e00-59e2-97e9-97993e864fc4", 00:16:45.522 "is_configured": true, 00:16:45.522 "data_offset": 2048, 00:16:45.522 "data_size": 63488 00:16:45.522 }, 00:16:45.522 { 00:16:45.522 "name": "BaseBdev2", 00:16:45.522 "uuid": "b9ea7341-daf6-5a4d-aa61-bf10a6d7449d", 00:16:45.522 "is_configured": true, 00:16:45.522 "data_offset": 2048, 00:16:45.522 "data_size": 63488 00:16:45.522 }, 00:16:45.522 { 00:16:45.522 "name": "BaseBdev3", 00:16:45.522 "uuid": 
"e8c54f98-d470-506c-a8ac-84bbf50a5152", 00:16:45.522 "is_configured": true, 00:16:45.522 "data_offset": 2048, 00:16:45.522 "data_size": 63488 00:16:45.522 }, 00:16:45.522 { 00:16:45.522 "name": "BaseBdev4", 00:16:45.522 "uuid": "69c62172-6a25-5b33-bd34-bacaaf3d1e51", 00:16:45.522 "is_configured": true, 00:16:45.522 "data_offset": 2048, 00:16:45.522 "data_size": 63488 00:16:45.522 } 00:16:45.522 ] 00:16:45.522 }' 00:16:45.522 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.522 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:45.522 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.522 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:45.522 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:45.522 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:45.522 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:45.522 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:45.522 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:45.522 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=635 00:16:45.522 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:45.522 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:45.522 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.522 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:16:45.522 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:45.522 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.522 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.522 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.522 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.522 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.522 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.522 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.522 "name": "raid_bdev1", 00:16:45.522 "uuid": "8e6f522e-374c-4b6e-afac-a6463c5d6bfb", 00:16:45.522 "strip_size_kb": 64, 00:16:45.522 "state": "online", 00:16:45.522 "raid_level": "raid5f", 00:16:45.522 "superblock": true, 00:16:45.522 "num_base_bdevs": 4, 00:16:45.522 "num_base_bdevs_discovered": 4, 00:16:45.522 "num_base_bdevs_operational": 4, 00:16:45.522 "process": { 00:16:45.522 "type": "rebuild", 00:16:45.522 "target": "spare", 00:16:45.522 "progress": { 00:16:45.522 "blocks": 21120, 00:16:45.522 "percent": 11 00:16:45.522 } 00:16:45.522 }, 00:16:45.522 "base_bdevs_list": [ 00:16:45.522 { 00:16:45.522 "name": "spare", 00:16:45.522 "uuid": "d95caf0f-3e00-59e2-97e9-97993e864fc4", 00:16:45.522 "is_configured": true, 00:16:45.522 "data_offset": 2048, 00:16:45.522 "data_size": 63488 00:16:45.522 }, 00:16:45.522 { 00:16:45.522 "name": "BaseBdev2", 00:16:45.522 "uuid": "b9ea7341-daf6-5a4d-aa61-bf10a6d7449d", 00:16:45.522 "is_configured": true, 00:16:45.522 "data_offset": 2048, 00:16:45.522 "data_size": 63488 00:16:45.522 }, 00:16:45.522 { 00:16:45.522 "name": "BaseBdev3", 00:16:45.522 "uuid": 
"e8c54f98-d470-506c-a8ac-84bbf50a5152", 00:16:45.522 "is_configured": true, 00:16:45.522 "data_offset": 2048, 00:16:45.522 "data_size": 63488 00:16:45.522 }, 00:16:45.522 { 00:16:45.522 "name": "BaseBdev4", 00:16:45.522 "uuid": "69c62172-6a25-5b33-bd34-bacaaf3d1e51", 00:16:45.522 "is_configured": true, 00:16:45.522 "data_offset": 2048, 00:16:45.522 "data_size": 63488 00:16:45.522 } 00:16:45.522 ] 00:16:45.522 }' 00:16:45.522 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.522 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:45.522 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.522 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:45.522 15:24:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:46.899 15:24:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:46.899 15:24:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:46.899 15:24:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.899 15:24:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:46.899 15:24:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:46.899 15:24:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.899 15:24:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.899 15:24:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.899 15:24:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:46.899 15:24:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.899 15:24:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.899 15:24:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.899 "name": "raid_bdev1", 00:16:46.899 "uuid": "8e6f522e-374c-4b6e-afac-a6463c5d6bfb", 00:16:46.899 "strip_size_kb": 64, 00:16:46.899 "state": "online", 00:16:46.899 "raid_level": "raid5f", 00:16:46.899 "superblock": true, 00:16:46.899 "num_base_bdevs": 4, 00:16:46.899 "num_base_bdevs_discovered": 4, 00:16:46.899 "num_base_bdevs_operational": 4, 00:16:46.899 "process": { 00:16:46.899 "type": "rebuild", 00:16:46.899 "target": "spare", 00:16:46.899 "progress": { 00:16:46.899 "blocks": 42240, 00:16:46.899 "percent": 22 00:16:46.899 } 00:16:46.899 }, 00:16:46.899 "base_bdevs_list": [ 00:16:46.899 { 00:16:46.899 "name": "spare", 00:16:46.899 "uuid": "d95caf0f-3e00-59e2-97e9-97993e864fc4", 00:16:46.899 "is_configured": true, 00:16:46.899 "data_offset": 2048, 00:16:46.899 "data_size": 63488 00:16:46.899 }, 00:16:46.899 { 00:16:46.899 "name": "BaseBdev2", 00:16:46.899 "uuid": "b9ea7341-daf6-5a4d-aa61-bf10a6d7449d", 00:16:46.899 "is_configured": true, 00:16:46.899 "data_offset": 2048, 00:16:46.899 "data_size": 63488 00:16:46.899 }, 00:16:46.899 { 00:16:46.899 "name": "BaseBdev3", 00:16:46.899 "uuid": "e8c54f98-d470-506c-a8ac-84bbf50a5152", 00:16:46.899 "is_configured": true, 00:16:46.899 "data_offset": 2048, 00:16:46.899 "data_size": 63488 00:16:46.899 }, 00:16:46.899 { 00:16:46.899 "name": "BaseBdev4", 00:16:46.899 "uuid": "69c62172-6a25-5b33-bd34-bacaaf3d1e51", 00:16:46.899 "is_configured": true, 00:16:46.899 "data_offset": 2048, 00:16:46.899 "data_size": 63488 00:16:46.899 } 00:16:46.899 ] 00:16:46.899 }' 00:16:46.899 15:24:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.899 15:24:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:46.899 15:24:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.899 15:24:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:46.899 15:24:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:47.868 15:24:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:47.868 15:24:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:47.868 15:24:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.868 15:24:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:47.868 15:24:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:47.868 15:24:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.868 15:24:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.868 15:24:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.868 15:24:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.868 15:24:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.868 15:24:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.868 15:24:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.868 "name": "raid_bdev1", 00:16:47.868 "uuid": "8e6f522e-374c-4b6e-afac-a6463c5d6bfb", 00:16:47.868 "strip_size_kb": 64, 00:16:47.868 "state": "online", 00:16:47.868 "raid_level": "raid5f", 00:16:47.868 "superblock": true, 
00:16:47.868 "num_base_bdevs": 4, 00:16:47.868 "num_base_bdevs_discovered": 4, 00:16:47.868 "num_base_bdevs_operational": 4, 00:16:47.868 "process": { 00:16:47.868 "type": "rebuild", 00:16:47.868 "target": "spare", 00:16:47.868 "progress": { 00:16:47.868 "blocks": 63360, 00:16:47.868 "percent": 33 00:16:47.868 } 00:16:47.868 }, 00:16:47.868 "base_bdevs_list": [ 00:16:47.868 { 00:16:47.868 "name": "spare", 00:16:47.868 "uuid": "d95caf0f-3e00-59e2-97e9-97993e864fc4", 00:16:47.869 "is_configured": true, 00:16:47.869 "data_offset": 2048, 00:16:47.869 "data_size": 63488 00:16:47.869 }, 00:16:47.869 { 00:16:47.869 "name": "BaseBdev2", 00:16:47.869 "uuid": "b9ea7341-daf6-5a4d-aa61-bf10a6d7449d", 00:16:47.869 "is_configured": true, 00:16:47.869 "data_offset": 2048, 00:16:47.869 "data_size": 63488 00:16:47.869 }, 00:16:47.869 { 00:16:47.869 "name": "BaseBdev3", 00:16:47.869 "uuid": "e8c54f98-d470-506c-a8ac-84bbf50a5152", 00:16:47.869 "is_configured": true, 00:16:47.869 "data_offset": 2048, 00:16:47.869 "data_size": 63488 00:16:47.869 }, 00:16:47.869 { 00:16:47.869 "name": "BaseBdev4", 00:16:47.869 "uuid": "69c62172-6a25-5b33-bd34-bacaaf3d1e51", 00:16:47.869 "is_configured": true, 00:16:47.869 "data_offset": 2048, 00:16:47.869 "data_size": 63488 00:16:47.869 } 00:16:47.869 ] 00:16:47.869 }' 00:16:47.869 15:24:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.869 15:24:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:47.869 15:24:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.869 15:24:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:47.869 15:24:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:48.806 15:24:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:48.806 15:24:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:48.806 15:24:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.806 15:24:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:48.806 15:24:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:48.806 15:24:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.806 15:24:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.806 15:24:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.806 15:24:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.806 15:24:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.806 15:24:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.806 15:24:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.806 "name": "raid_bdev1", 00:16:48.806 "uuid": "8e6f522e-374c-4b6e-afac-a6463c5d6bfb", 00:16:48.806 "strip_size_kb": 64, 00:16:48.806 "state": "online", 00:16:48.806 "raid_level": "raid5f", 00:16:48.806 "superblock": true, 00:16:48.806 "num_base_bdevs": 4, 00:16:48.806 "num_base_bdevs_discovered": 4, 00:16:48.806 "num_base_bdevs_operational": 4, 00:16:48.806 "process": { 00:16:48.806 "type": "rebuild", 00:16:48.806 "target": "spare", 00:16:48.806 "progress": { 00:16:48.806 "blocks": 84480, 00:16:48.806 "percent": 44 00:16:48.806 } 00:16:48.806 }, 00:16:48.806 "base_bdevs_list": [ 00:16:48.806 { 00:16:48.806 "name": "spare", 00:16:48.806 "uuid": "d95caf0f-3e00-59e2-97e9-97993e864fc4", 00:16:48.806 "is_configured": true, 00:16:48.806 "data_offset": 2048, 00:16:48.806 
"data_size": 63488 00:16:48.806 }, 00:16:48.806 { 00:16:48.806 "name": "BaseBdev2", 00:16:48.806 "uuid": "b9ea7341-daf6-5a4d-aa61-bf10a6d7449d", 00:16:48.806 "is_configured": true, 00:16:48.806 "data_offset": 2048, 00:16:48.806 "data_size": 63488 00:16:48.806 }, 00:16:48.806 { 00:16:48.806 "name": "BaseBdev3", 00:16:48.806 "uuid": "e8c54f98-d470-506c-a8ac-84bbf50a5152", 00:16:48.806 "is_configured": true, 00:16:48.806 "data_offset": 2048, 00:16:48.806 "data_size": 63488 00:16:48.806 }, 00:16:48.806 { 00:16:48.806 "name": "BaseBdev4", 00:16:48.806 "uuid": "69c62172-6a25-5b33-bd34-bacaaf3d1e51", 00:16:48.806 "is_configured": true, 00:16:48.806 "data_offset": 2048, 00:16:48.806 "data_size": 63488 00:16:48.806 } 00:16:48.806 ] 00:16:48.806 }' 00:16:48.806 15:24:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.806 15:24:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:48.806 15:24:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.065 15:24:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:49.065 15:24:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:50.003 15:24:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:50.003 15:24:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:50.003 15:24:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.003 15:24:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:50.003 15:24:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:50.003 15:24:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:16:50.003 15:24:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.003 15:24:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.003 15:24:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.003 15:24:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.003 15:24:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.004 15:24:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.004 "name": "raid_bdev1", 00:16:50.004 "uuid": "8e6f522e-374c-4b6e-afac-a6463c5d6bfb", 00:16:50.004 "strip_size_kb": 64, 00:16:50.004 "state": "online", 00:16:50.004 "raid_level": "raid5f", 00:16:50.004 "superblock": true, 00:16:50.004 "num_base_bdevs": 4, 00:16:50.004 "num_base_bdevs_discovered": 4, 00:16:50.004 "num_base_bdevs_operational": 4, 00:16:50.004 "process": { 00:16:50.004 "type": "rebuild", 00:16:50.004 "target": "spare", 00:16:50.004 "progress": { 00:16:50.004 "blocks": 105600, 00:16:50.004 "percent": 55 00:16:50.004 } 00:16:50.004 }, 00:16:50.004 "base_bdevs_list": [ 00:16:50.004 { 00:16:50.004 "name": "spare", 00:16:50.004 "uuid": "d95caf0f-3e00-59e2-97e9-97993e864fc4", 00:16:50.004 "is_configured": true, 00:16:50.004 "data_offset": 2048, 00:16:50.004 "data_size": 63488 00:16:50.004 }, 00:16:50.004 { 00:16:50.004 "name": "BaseBdev2", 00:16:50.004 "uuid": "b9ea7341-daf6-5a4d-aa61-bf10a6d7449d", 00:16:50.004 "is_configured": true, 00:16:50.004 "data_offset": 2048, 00:16:50.004 "data_size": 63488 00:16:50.004 }, 00:16:50.004 { 00:16:50.004 "name": "BaseBdev3", 00:16:50.004 "uuid": "e8c54f98-d470-506c-a8ac-84bbf50a5152", 00:16:50.004 "is_configured": true, 00:16:50.004 "data_offset": 2048, 00:16:50.004 "data_size": 63488 00:16:50.004 }, 00:16:50.004 { 00:16:50.004 "name": "BaseBdev4", 
00:16:50.004 "uuid": "69c62172-6a25-5b33-bd34-bacaaf3d1e51", 00:16:50.004 "is_configured": true, 00:16:50.004 "data_offset": 2048, 00:16:50.004 "data_size": 63488 00:16:50.004 } 00:16:50.004 ] 00:16:50.004 }' 00:16:50.004 15:24:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.004 15:24:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:50.004 15:24:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.004 15:24:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.004 15:24:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:51.379 15:24:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:51.379 15:24:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:51.379 15:24:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.379 15:24:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:51.379 15:24:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:51.379 15:24:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.379 15:24:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.379 15:24:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.379 15:24:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.379 15:24:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.379 15:24:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:51.379 15:24:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.379 "name": "raid_bdev1", 00:16:51.379 "uuid": "8e6f522e-374c-4b6e-afac-a6463c5d6bfb", 00:16:51.379 "strip_size_kb": 64, 00:16:51.379 "state": "online", 00:16:51.379 "raid_level": "raid5f", 00:16:51.379 "superblock": true, 00:16:51.379 "num_base_bdevs": 4, 00:16:51.379 "num_base_bdevs_discovered": 4, 00:16:51.379 "num_base_bdevs_operational": 4, 00:16:51.379 "process": { 00:16:51.379 "type": "rebuild", 00:16:51.379 "target": "spare", 00:16:51.379 "progress": { 00:16:51.379 "blocks": 126720, 00:16:51.379 "percent": 66 00:16:51.379 } 00:16:51.379 }, 00:16:51.379 "base_bdevs_list": [ 00:16:51.379 { 00:16:51.379 "name": "spare", 00:16:51.379 "uuid": "d95caf0f-3e00-59e2-97e9-97993e864fc4", 00:16:51.379 "is_configured": true, 00:16:51.379 "data_offset": 2048, 00:16:51.379 "data_size": 63488 00:16:51.379 }, 00:16:51.379 { 00:16:51.379 "name": "BaseBdev2", 00:16:51.379 "uuid": "b9ea7341-daf6-5a4d-aa61-bf10a6d7449d", 00:16:51.379 "is_configured": true, 00:16:51.379 "data_offset": 2048, 00:16:51.379 "data_size": 63488 00:16:51.379 }, 00:16:51.379 { 00:16:51.379 "name": "BaseBdev3", 00:16:51.379 "uuid": "e8c54f98-d470-506c-a8ac-84bbf50a5152", 00:16:51.379 "is_configured": true, 00:16:51.379 "data_offset": 2048, 00:16:51.379 "data_size": 63488 00:16:51.379 }, 00:16:51.379 { 00:16:51.379 "name": "BaseBdev4", 00:16:51.379 "uuid": "69c62172-6a25-5b33-bd34-bacaaf3d1e51", 00:16:51.379 "is_configured": true, 00:16:51.379 "data_offset": 2048, 00:16:51.379 "data_size": 63488 00:16:51.379 } 00:16:51.379 ] 00:16:51.379 }' 00:16:51.379 15:24:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.379 15:24:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:51.379 15:24:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:16:51.379 15:24:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:51.379 15:24:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:52.316 15:24:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:52.316 15:24:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:52.316 15:24:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.316 15:24:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:52.316 15:24:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:52.316 15:24:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.316 15:24:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.316 15:24:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.316 15:24:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.316 15:24:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.316 15:24:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.316 15:24:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.316 "name": "raid_bdev1", 00:16:52.316 "uuid": "8e6f522e-374c-4b6e-afac-a6463c5d6bfb", 00:16:52.316 "strip_size_kb": 64, 00:16:52.316 "state": "online", 00:16:52.316 "raid_level": "raid5f", 00:16:52.316 "superblock": true, 00:16:52.316 "num_base_bdevs": 4, 00:16:52.317 "num_base_bdevs_discovered": 4, 00:16:52.317 "num_base_bdevs_operational": 4, 00:16:52.317 "process": { 00:16:52.317 "type": "rebuild", 00:16:52.317 "target": "spare", 
00:16:52.317 "progress": { 00:16:52.317 "blocks": 149760, 00:16:52.317 "percent": 78 00:16:52.317 } 00:16:52.317 }, 00:16:52.317 "base_bdevs_list": [ 00:16:52.317 { 00:16:52.317 "name": "spare", 00:16:52.317 "uuid": "d95caf0f-3e00-59e2-97e9-97993e864fc4", 00:16:52.317 "is_configured": true, 00:16:52.317 "data_offset": 2048, 00:16:52.317 "data_size": 63488 00:16:52.317 }, 00:16:52.317 { 00:16:52.317 "name": "BaseBdev2", 00:16:52.317 "uuid": "b9ea7341-daf6-5a4d-aa61-bf10a6d7449d", 00:16:52.317 "is_configured": true, 00:16:52.317 "data_offset": 2048, 00:16:52.317 "data_size": 63488 00:16:52.317 }, 00:16:52.317 { 00:16:52.317 "name": "BaseBdev3", 00:16:52.317 "uuid": "e8c54f98-d470-506c-a8ac-84bbf50a5152", 00:16:52.317 "is_configured": true, 00:16:52.317 "data_offset": 2048, 00:16:52.317 "data_size": 63488 00:16:52.317 }, 00:16:52.317 { 00:16:52.317 "name": "BaseBdev4", 00:16:52.317 "uuid": "69c62172-6a25-5b33-bd34-bacaaf3d1e51", 00:16:52.317 "is_configured": true, 00:16:52.317 "data_offset": 2048, 00:16:52.317 "data_size": 63488 00:16:52.317 } 00:16:52.317 ] 00:16:52.317 }' 00:16:52.317 15:24:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.317 15:24:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:52.317 15:24:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.317 15:24:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:52.317 15:24:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:53.256 15:24:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:53.256 15:24:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:53.256 15:24:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:53.256 15:24:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:53.256 15:24:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:53.256 15:24:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.256 15:24:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.256 15:24:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.256 15:24:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.256 15:24:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.516 15:24:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.516 15:24:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.516 "name": "raid_bdev1", 00:16:53.516 "uuid": "8e6f522e-374c-4b6e-afac-a6463c5d6bfb", 00:16:53.516 "strip_size_kb": 64, 00:16:53.516 "state": "online", 00:16:53.516 "raid_level": "raid5f", 00:16:53.516 "superblock": true, 00:16:53.516 "num_base_bdevs": 4, 00:16:53.516 "num_base_bdevs_discovered": 4, 00:16:53.516 "num_base_bdevs_operational": 4, 00:16:53.516 "process": { 00:16:53.516 "type": "rebuild", 00:16:53.516 "target": "spare", 00:16:53.516 "progress": { 00:16:53.516 "blocks": 170880, 00:16:53.516 "percent": 89 00:16:53.516 } 00:16:53.516 }, 00:16:53.516 "base_bdevs_list": [ 00:16:53.516 { 00:16:53.516 "name": "spare", 00:16:53.516 "uuid": "d95caf0f-3e00-59e2-97e9-97993e864fc4", 00:16:53.516 "is_configured": true, 00:16:53.516 "data_offset": 2048, 00:16:53.516 "data_size": 63488 00:16:53.516 }, 00:16:53.516 { 00:16:53.516 "name": "BaseBdev2", 00:16:53.516 "uuid": "b9ea7341-daf6-5a4d-aa61-bf10a6d7449d", 00:16:53.516 "is_configured": true, 00:16:53.516 
"data_offset": 2048, 00:16:53.516 "data_size": 63488 00:16:53.516 }, 00:16:53.516 { 00:16:53.516 "name": "BaseBdev3", 00:16:53.516 "uuid": "e8c54f98-d470-506c-a8ac-84bbf50a5152", 00:16:53.516 "is_configured": true, 00:16:53.516 "data_offset": 2048, 00:16:53.516 "data_size": 63488 00:16:53.516 }, 00:16:53.516 { 00:16:53.516 "name": "BaseBdev4", 00:16:53.516 "uuid": "69c62172-6a25-5b33-bd34-bacaaf3d1e51", 00:16:53.516 "is_configured": true, 00:16:53.516 "data_offset": 2048, 00:16:53.516 "data_size": 63488 00:16:53.516 } 00:16:53.516 ] 00:16:53.516 }' 00:16:53.516 15:24:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.516 15:24:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:53.516 15:24:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.516 15:24:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:53.516 15:24:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:54.454 [2024-11-20 15:24:40.790574] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:54.454 [2024-11-20 15:24:40.790691] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:54.454 [2024-11-20 15:24:40.790878] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.454 15:24:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:54.454 15:24:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:54.454 15:24:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.454 15:24:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.454 15:24:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.454 15:24:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.454 15:24:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.454 15:24:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.454 15:24:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.454 15:24:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.454 15:24:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.454 15:24:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.454 "name": "raid_bdev1", 00:16:54.454 "uuid": "8e6f522e-374c-4b6e-afac-a6463c5d6bfb", 00:16:54.454 "strip_size_kb": 64, 00:16:54.454 "state": "online", 00:16:54.454 "raid_level": "raid5f", 00:16:54.454 "superblock": true, 00:16:54.454 "num_base_bdevs": 4, 00:16:54.454 "num_base_bdevs_discovered": 4, 00:16:54.454 "num_base_bdevs_operational": 4, 00:16:54.454 "base_bdevs_list": [ 00:16:54.454 { 00:16:54.454 "name": "spare", 00:16:54.454 "uuid": "d95caf0f-3e00-59e2-97e9-97993e864fc4", 00:16:54.454 "is_configured": true, 00:16:54.454 "data_offset": 2048, 00:16:54.454 "data_size": 63488 00:16:54.454 }, 00:16:54.454 { 00:16:54.454 "name": "BaseBdev2", 00:16:54.454 "uuid": "b9ea7341-daf6-5a4d-aa61-bf10a6d7449d", 00:16:54.454 "is_configured": true, 00:16:54.454 "data_offset": 2048, 00:16:54.454 "data_size": 63488 00:16:54.454 }, 00:16:54.454 { 00:16:54.454 "name": "BaseBdev3", 00:16:54.454 "uuid": "e8c54f98-d470-506c-a8ac-84bbf50a5152", 00:16:54.454 "is_configured": true, 00:16:54.454 "data_offset": 2048, 00:16:54.454 "data_size": 63488 00:16:54.454 }, 00:16:54.454 { 00:16:54.454 "name": "BaseBdev4", 00:16:54.454 "uuid": 
"69c62172-6a25-5b33-bd34-bacaaf3d1e51", 00:16:54.454 "is_configured": true, 00:16:54.454 "data_offset": 2048, 00:16:54.454 "data_size": 63488 00:16:54.454 } 00:16:54.454 ] 00:16:54.454 }' 00:16:54.454 15:24:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.714 15:24:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:54.714 15:24:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.714 15:24:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:54.714 15:24:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:54.714 15:24:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:54.714 15:24:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.714 15:24:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:54.714 15:24:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:54.714 15:24:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.714 15:24:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.714 15:24:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.714 15:24:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.714 15:24:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.714 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.714 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.714 "name": 
"raid_bdev1", 00:16:54.714 "uuid": "8e6f522e-374c-4b6e-afac-a6463c5d6bfb", 00:16:54.714 "strip_size_kb": 64, 00:16:54.714 "state": "online", 00:16:54.714 "raid_level": "raid5f", 00:16:54.714 "superblock": true, 00:16:54.714 "num_base_bdevs": 4, 00:16:54.714 "num_base_bdevs_discovered": 4, 00:16:54.714 "num_base_bdevs_operational": 4, 00:16:54.714 "base_bdevs_list": [ 00:16:54.714 { 00:16:54.714 "name": "spare", 00:16:54.714 "uuid": "d95caf0f-3e00-59e2-97e9-97993e864fc4", 00:16:54.714 "is_configured": true, 00:16:54.714 "data_offset": 2048, 00:16:54.714 "data_size": 63488 00:16:54.714 }, 00:16:54.714 { 00:16:54.714 "name": "BaseBdev2", 00:16:54.714 "uuid": "b9ea7341-daf6-5a4d-aa61-bf10a6d7449d", 00:16:54.714 "is_configured": true, 00:16:54.714 "data_offset": 2048, 00:16:54.714 "data_size": 63488 00:16:54.714 }, 00:16:54.714 { 00:16:54.714 "name": "BaseBdev3", 00:16:54.714 "uuid": "e8c54f98-d470-506c-a8ac-84bbf50a5152", 00:16:54.714 "is_configured": true, 00:16:54.714 "data_offset": 2048, 00:16:54.714 "data_size": 63488 00:16:54.714 }, 00:16:54.714 { 00:16:54.714 "name": "BaseBdev4", 00:16:54.714 "uuid": "69c62172-6a25-5b33-bd34-bacaaf3d1e51", 00:16:54.714 "is_configured": true, 00:16:54.714 "data_offset": 2048, 00:16:54.714 "data_size": 63488 00:16:54.714 } 00:16:54.714 ] 00:16:54.714 }' 00:16:54.714 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.714 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:54.715 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.715 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:54.715 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:54.715 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:54.715 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.715 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.715 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.715 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:54.715 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.715 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.715 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.715 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.715 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.715 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.715 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.715 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.715 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.715 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.715 "name": "raid_bdev1", 00:16:54.715 "uuid": "8e6f522e-374c-4b6e-afac-a6463c5d6bfb", 00:16:54.715 "strip_size_kb": 64, 00:16:54.715 "state": "online", 00:16:54.715 "raid_level": "raid5f", 00:16:54.715 "superblock": true, 00:16:54.715 "num_base_bdevs": 4, 00:16:54.715 "num_base_bdevs_discovered": 4, 00:16:54.715 "num_base_bdevs_operational": 4, 00:16:54.715 "base_bdevs_list": [ 00:16:54.715 { 00:16:54.715 "name": "spare", 
00:16:54.715 "uuid": "d95caf0f-3e00-59e2-97e9-97993e864fc4", 00:16:54.715 "is_configured": true, 00:16:54.715 "data_offset": 2048, 00:16:54.715 "data_size": 63488 00:16:54.715 }, 00:16:54.715 { 00:16:54.715 "name": "BaseBdev2", 00:16:54.715 "uuid": "b9ea7341-daf6-5a4d-aa61-bf10a6d7449d", 00:16:54.715 "is_configured": true, 00:16:54.715 "data_offset": 2048, 00:16:54.715 "data_size": 63488 00:16:54.715 }, 00:16:54.715 { 00:16:54.715 "name": "BaseBdev3", 00:16:54.715 "uuid": "e8c54f98-d470-506c-a8ac-84bbf50a5152", 00:16:54.715 "is_configured": true, 00:16:54.715 "data_offset": 2048, 00:16:54.715 "data_size": 63488 00:16:54.715 }, 00:16:54.715 { 00:16:54.715 "name": "BaseBdev4", 00:16:54.715 "uuid": "69c62172-6a25-5b33-bd34-bacaaf3d1e51", 00:16:54.715 "is_configured": true, 00:16:54.715 "data_offset": 2048, 00:16:54.715 "data_size": 63488 00:16:54.715 } 00:16:54.715 ] 00:16:54.715 }' 00:16:54.715 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.715 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.284 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:55.284 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.284 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.284 [2024-11-20 15:24:41.539134] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:55.284 [2024-11-20 15:24:41.539176] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:55.284 [2024-11-20 15:24:41.539264] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:55.284 [2024-11-20 15:24:41.539366] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:55.284 [2024-11-20 15:24:41.539390] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:55.284 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.284 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.284 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.284 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.284 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:55.284 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.284 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:55.284 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:55.284 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:55.284 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:55.284 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:55.284 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:55.284 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:55.284 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:55.284 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:55.284 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:55.284 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:55.285 15:24:41 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:55.285 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:55.548 /dev/nbd0 00:16:55.548 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:55.548 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:55.548 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:55.548 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:55.548 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:55.548 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:55.548 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:55.548 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:55.548 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:55.548 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:55.548 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:55.548 1+0 records in 00:16:55.548 1+0 records out 00:16:55.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305934 s, 13.4 MB/s 00:16:55.548 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:55.548 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:55.548 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:55.548 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:55.548 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:55.548 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:55.548 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:55.548 15:24:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:55.807 /dev/nbd1 00:16:55.807 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:55.807 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:55.807 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:55.807 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:55.807 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:55.807 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:55.807 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:55.807 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:55.807 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:55.807 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:55.807 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:55.807 1+0 records in 00:16:55.807 1+0 records out 00:16:55.807 4096 bytes 
(4.1 kB, 4.0 KiB) copied, 0.000314403 s, 13.0 MB/s 00:16:55.807 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:55.807 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:55.807 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:55.807 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:55.807 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:55.807 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:55.807 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:55.807 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:56.066 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:56.066 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:56.066 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:56.066 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:56.066 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:56.066 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:56.066 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:56.066 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:56.066 15:24:42 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:56.066 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:56.066 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:56.066 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:56.066 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:56.066 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:56.066 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:56.066 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:56.066 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:56.326 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:56.326 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:56.326 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:56.326 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:56.326 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:56.326 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:56.326 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:56.326 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:56.326 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:56.326 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:56.326 
15:24:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.326 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.586 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.586 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:56.586 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.586 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.586 [2024-11-20 15:24:42.813592] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:56.586 [2024-11-20 15:24:42.813683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.586 [2024-11-20 15:24:42.813727] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:56.586 [2024-11-20 15:24:42.813740] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.586 [2024-11-20 15:24:42.816627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.586 [2024-11-20 15:24:42.816677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:56.586 [2024-11-20 15:24:42.816782] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:56.586 [2024-11-20 15:24:42.816840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:56.586 [2024-11-20 15:24:42.817017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:56.586 [2024-11-20 15:24:42.817116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:56.586 [2024-11-20 15:24:42.817200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:16:56.586 spare 00:16:56.586 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.586 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:56.586 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.586 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.586 [2024-11-20 15:24:42.917142] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:56.586 [2024-11-20 15:24:42.917214] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:56.586 [2024-11-20 15:24:42.917571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:16:56.586 [2024-11-20 15:24:42.925582] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:56.586 [2024-11-20 15:24:42.925616] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:56.586 [2024-11-20 15:24:42.925891] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.586 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.586 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:56.586 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:56.586 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:56.586 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:56.586 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.586 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:16:56.586 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.586 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.586 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.586 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.586 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.586 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.586 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.586 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.586 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.586 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.586 "name": "raid_bdev1", 00:16:56.586 "uuid": "8e6f522e-374c-4b6e-afac-a6463c5d6bfb", 00:16:56.586 "strip_size_kb": 64, 00:16:56.586 "state": "online", 00:16:56.586 "raid_level": "raid5f", 00:16:56.586 "superblock": true, 00:16:56.586 "num_base_bdevs": 4, 00:16:56.586 "num_base_bdevs_discovered": 4, 00:16:56.586 "num_base_bdevs_operational": 4, 00:16:56.586 "base_bdevs_list": [ 00:16:56.586 { 00:16:56.586 "name": "spare", 00:16:56.586 "uuid": "d95caf0f-3e00-59e2-97e9-97993e864fc4", 00:16:56.586 "is_configured": true, 00:16:56.586 "data_offset": 2048, 00:16:56.586 "data_size": 63488 00:16:56.586 }, 00:16:56.586 { 00:16:56.586 "name": "BaseBdev2", 00:16:56.586 "uuid": "b9ea7341-daf6-5a4d-aa61-bf10a6d7449d", 00:16:56.586 "is_configured": true, 00:16:56.586 "data_offset": 2048, 00:16:56.586 "data_size": 63488 00:16:56.587 }, 00:16:56.587 { 00:16:56.587 "name": 
"BaseBdev3", 00:16:56.587 "uuid": "e8c54f98-d470-506c-a8ac-84bbf50a5152", 00:16:56.587 "is_configured": true, 00:16:56.587 "data_offset": 2048, 00:16:56.587 "data_size": 63488 00:16:56.587 }, 00:16:56.587 { 00:16:56.587 "name": "BaseBdev4", 00:16:56.587 "uuid": "69c62172-6a25-5b33-bd34-bacaaf3d1e51", 00:16:56.587 "is_configured": true, 00:16:56.587 "data_offset": 2048, 00:16:56.587 "data_size": 63488 00:16:56.587 } 00:16:56.587 ] 00:16:56.587 }' 00:16:56.587 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.587 15:24:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.156 "name": "raid_bdev1", 00:16:57.156 "uuid": "8e6f522e-374c-4b6e-afac-a6463c5d6bfb", 00:16:57.156 
"strip_size_kb": 64, 00:16:57.156 "state": "online", 00:16:57.156 "raid_level": "raid5f", 00:16:57.156 "superblock": true, 00:16:57.156 "num_base_bdevs": 4, 00:16:57.156 "num_base_bdevs_discovered": 4, 00:16:57.156 "num_base_bdevs_operational": 4, 00:16:57.156 "base_bdevs_list": [ 00:16:57.156 { 00:16:57.156 "name": "spare", 00:16:57.156 "uuid": "d95caf0f-3e00-59e2-97e9-97993e864fc4", 00:16:57.156 "is_configured": true, 00:16:57.156 "data_offset": 2048, 00:16:57.156 "data_size": 63488 00:16:57.156 }, 00:16:57.156 { 00:16:57.156 "name": "BaseBdev2", 00:16:57.156 "uuid": "b9ea7341-daf6-5a4d-aa61-bf10a6d7449d", 00:16:57.156 "is_configured": true, 00:16:57.156 "data_offset": 2048, 00:16:57.156 "data_size": 63488 00:16:57.156 }, 00:16:57.156 { 00:16:57.156 "name": "BaseBdev3", 00:16:57.156 "uuid": "e8c54f98-d470-506c-a8ac-84bbf50a5152", 00:16:57.156 "is_configured": true, 00:16:57.156 "data_offset": 2048, 00:16:57.156 "data_size": 63488 00:16:57.156 }, 00:16:57.156 { 00:16:57.156 "name": "BaseBdev4", 00:16:57.156 "uuid": "69c62172-6a25-5b33-bd34-bacaaf3d1e51", 00:16:57.156 "is_configured": true, 00:16:57.156 "data_offset": 2048, 00:16:57.156 "data_size": 63488 00:16:57.156 } 00:16:57.156 ] 00:16:57.156 }' 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.156 [2024-11-20 15:24:43.493868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.156 "name": "raid_bdev1", 00:16:57.156 "uuid": "8e6f522e-374c-4b6e-afac-a6463c5d6bfb", 00:16:57.156 "strip_size_kb": 64, 00:16:57.156 "state": "online", 00:16:57.156 "raid_level": "raid5f", 00:16:57.156 "superblock": true, 00:16:57.156 "num_base_bdevs": 4, 00:16:57.156 "num_base_bdevs_discovered": 3, 00:16:57.156 "num_base_bdevs_operational": 3, 00:16:57.156 "base_bdevs_list": [ 00:16:57.156 { 00:16:57.156 "name": null, 00:16:57.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.156 "is_configured": false, 00:16:57.156 "data_offset": 0, 00:16:57.156 "data_size": 63488 00:16:57.156 }, 00:16:57.156 { 00:16:57.156 "name": "BaseBdev2", 00:16:57.156 "uuid": "b9ea7341-daf6-5a4d-aa61-bf10a6d7449d", 00:16:57.156 "is_configured": true, 00:16:57.156 "data_offset": 2048, 00:16:57.156 "data_size": 63488 00:16:57.156 }, 00:16:57.156 { 00:16:57.156 "name": "BaseBdev3", 00:16:57.156 "uuid": "e8c54f98-d470-506c-a8ac-84bbf50a5152", 00:16:57.156 "is_configured": true, 00:16:57.156 "data_offset": 2048, 00:16:57.156 "data_size": 63488 00:16:57.156 }, 00:16:57.156 { 00:16:57.156 "name": "BaseBdev4", 00:16:57.156 "uuid": "69c62172-6a25-5b33-bd34-bacaaf3d1e51", 00:16:57.156 "is_configured": true, 00:16:57.156 "data_offset": 2048, 00:16:57.156 "data_size": 63488 00:16:57.156 } 00:16:57.156 ] 00:16:57.156 }' 
00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.156 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.416 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:57.416 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.416 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.416 [2024-11-20 15:24:43.885337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:57.416 [2024-11-20 15:24:43.885531] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:57.416 [2024-11-20 15:24:43.885555] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:57.416 [2024-11-20 15:24:43.885600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:57.674 [2024-11-20 15:24:43.901456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:16:57.674 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.674 15:24:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:57.674 [2024-11-20 15:24:43.911288] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:58.611 15:24:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:58.611 15:24:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.611 15:24:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:58.611 15:24:44 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:16:58.611 15:24:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.611 15:24:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.611 15:24:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.611 15:24:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.611 15:24:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.611 15:24:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.611 15:24:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.611 "name": "raid_bdev1", 00:16:58.611 "uuid": "8e6f522e-374c-4b6e-afac-a6463c5d6bfb", 00:16:58.611 "strip_size_kb": 64, 00:16:58.611 "state": "online", 00:16:58.611 "raid_level": "raid5f", 00:16:58.611 "superblock": true, 00:16:58.611 "num_base_bdevs": 4, 00:16:58.611 "num_base_bdevs_discovered": 4, 00:16:58.611 "num_base_bdevs_operational": 4, 00:16:58.611 "process": { 00:16:58.612 "type": "rebuild", 00:16:58.612 "target": "spare", 00:16:58.612 "progress": { 00:16:58.612 "blocks": 19200, 00:16:58.612 "percent": 10 00:16:58.612 } 00:16:58.612 }, 00:16:58.612 "base_bdevs_list": [ 00:16:58.612 { 00:16:58.612 "name": "spare", 00:16:58.612 "uuid": "d95caf0f-3e00-59e2-97e9-97993e864fc4", 00:16:58.612 "is_configured": true, 00:16:58.612 "data_offset": 2048, 00:16:58.612 "data_size": 63488 00:16:58.612 }, 00:16:58.612 { 00:16:58.612 "name": "BaseBdev2", 00:16:58.612 "uuid": "b9ea7341-daf6-5a4d-aa61-bf10a6d7449d", 00:16:58.612 "is_configured": true, 00:16:58.612 "data_offset": 2048, 00:16:58.612 "data_size": 63488 00:16:58.612 }, 00:16:58.612 { 00:16:58.612 "name": "BaseBdev3", 00:16:58.612 "uuid": "e8c54f98-d470-506c-a8ac-84bbf50a5152", 00:16:58.612 
"is_configured": true, 00:16:58.612 "data_offset": 2048, 00:16:58.612 "data_size": 63488 00:16:58.612 }, 00:16:58.612 { 00:16:58.612 "name": "BaseBdev4", 00:16:58.612 "uuid": "69c62172-6a25-5b33-bd34-bacaaf3d1e51", 00:16:58.612 "is_configured": true, 00:16:58.612 "data_offset": 2048, 00:16:58.612 "data_size": 63488 00:16:58.612 } 00:16:58.612 ] 00:16:58.612 }' 00:16:58.612 15:24:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.612 15:24:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:58.612 15:24:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.612 15:24:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:58.612 15:24:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:58.612 15:24:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.612 15:24:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.612 [2024-11-20 15:24:45.051199] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:58.871 [2024-11-20 15:24:45.120386] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:58.871 [2024-11-20 15:24:45.120499] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.871 [2024-11-20 15:24:45.120520] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:58.871 [2024-11-20 15:24:45.120532] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:58.871 15:24:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.871 15:24:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state 
raid_bdev1 online raid5f 64 3 00:16:58.871 15:24:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.871 15:24:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.871 15:24:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.871 15:24:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.871 15:24:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:58.871 15:24:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.871 15:24:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.871 15:24:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.871 15:24:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.871 15:24:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.871 15:24:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.871 15:24:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.871 15:24:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.871 15:24:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.871 15:24:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.871 "name": "raid_bdev1", 00:16:58.871 "uuid": "8e6f522e-374c-4b6e-afac-a6463c5d6bfb", 00:16:58.871 "strip_size_kb": 64, 00:16:58.871 "state": "online", 00:16:58.871 "raid_level": "raid5f", 00:16:58.871 "superblock": true, 00:16:58.871 "num_base_bdevs": 4, 00:16:58.871 "num_base_bdevs_discovered": 3, 
00:16:58.871 "num_base_bdevs_operational": 3, 00:16:58.871 "base_bdevs_list": [ 00:16:58.871 { 00:16:58.871 "name": null, 00:16:58.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.871 "is_configured": false, 00:16:58.871 "data_offset": 0, 00:16:58.871 "data_size": 63488 00:16:58.871 }, 00:16:58.871 { 00:16:58.871 "name": "BaseBdev2", 00:16:58.871 "uuid": "b9ea7341-daf6-5a4d-aa61-bf10a6d7449d", 00:16:58.871 "is_configured": true, 00:16:58.871 "data_offset": 2048, 00:16:58.871 "data_size": 63488 00:16:58.871 }, 00:16:58.871 { 00:16:58.871 "name": "BaseBdev3", 00:16:58.871 "uuid": "e8c54f98-d470-506c-a8ac-84bbf50a5152", 00:16:58.871 "is_configured": true, 00:16:58.871 "data_offset": 2048, 00:16:58.871 "data_size": 63488 00:16:58.871 }, 00:16:58.871 { 00:16:58.871 "name": "BaseBdev4", 00:16:58.871 "uuid": "69c62172-6a25-5b33-bd34-bacaaf3d1e51", 00:16:58.872 "is_configured": true, 00:16:58.872 "data_offset": 2048, 00:16:58.872 "data_size": 63488 00:16:58.872 } 00:16:58.872 ] 00:16:58.872 }' 00:16:58.872 15:24:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.872 15:24:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.131 15:24:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:59.131 15:24:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.131 15:24:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.131 [2024-11-20 15:24:45.572602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:59.131 [2024-11-20 15:24:45.572697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.131 [2024-11-20 15:24:45.572728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:16:59.131 [2024-11-20 15:24:45.572745] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.131 [2024-11-20 15:24:45.573297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.131 [2024-11-20 15:24:45.573324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:59.131 [2024-11-20 15:24:45.573430] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:59.131 [2024-11-20 15:24:45.573449] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:59.131 [2024-11-20 15:24:45.573462] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:59.131 [2024-11-20 15:24:45.573495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:59.131 [2024-11-20 15:24:45.589449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:16:59.131 spare 00:16:59.131 15:24:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.131 15:24:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:59.131 [2024-11-20 15:24:45.599923] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:00.521 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:00.521 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.521 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:00.521 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:00.521 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.521 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.521 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.521 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.521 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.521 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.521 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.521 "name": "raid_bdev1", 00:17:00.521 "uuid": "8e6f522e-374c-4b6e-afac-a6463c5d6bfb", 00:17:00.521 "strip_size_kb": 64, 00:17:00.521 "state": "online", 00:17:00.521 "raid_level": "raid5f", 00:17:00.521 "superblock": true, 00:17:00.521 "num_base_bdevs": 4, 00:17:00.521 "num_base_bdevs_discovered": 4, 00:17:00.521 "num_base_bdevs_operational": 4, 00:17:00.521 "process": { 00:17:00.521 "type": "rebuild", 00:17:00.521 "target": "spare", 00:17:00.521 "progress": { 00:17:00.521 "blocks": 19200, 00:17:00.521 "percent": 10 00:17:00.521 } 00:17:00.521 }, 00:17:00.521 "base_bdevs_list": [ 00:17:00.521 { 00:17:00.521 "name": "spare", 00:17:00.521 "uuid": "d95caf0f-3e00-59e2-97e9-97993e864fc4", 00:17:00.522 "is_configured": true, 00:17:00.522 "data_offset": 2048, 00:17:00.522 "data_size": 63488 00:17:00.522 }, 00:17:00.522 { 00:17:00.522 "name": "BaseBdev2", 00:17:00.522 "uuid": "b9ea7341-daf6-5a4d-aa61-bf10a6d7449d", 00:17:00.522 "is_configured": true, 00:17:00.522 "data_offset": 2048, 00:17:00.522 "data_size": 63488 00:17:00.522 }, 00:17:00.522 { 00:17:00.522 "name": "BaseBdev3", 00:17:00.522 "uuid": "e8c54f98-d470-506c-a8ac-84bbf50a5152", 00:17:00.522 "is_configured": true, 00:17:00.522 "data_offset": 2048, 00:17:00.522 "data_size": 63488 00:17:00.522 }, 00:17:00.522 { 00:17:00.522 "name": "BaseBdev4", 00:17:00.522 "uuid": "69c62172-6a25-5b33-bd34-bacaaf3d1e51", 
00:17:00.522 "is_configured": true, 00:17:00.522 "data_offset": 2048, 00:17:00.522 "data_size": 63488 00:17:00.522 } 00:17:00.522 ] 00:17:00.522 }' 00:17:00.522 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.522 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:00.522 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.522 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:00.522 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:00.522 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.522 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.522 [2024-11-20 15:24:46.727277] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:00.522 [2024-11-20 15:24:46.808746] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:00.522 [2024-11-20 15:24:46.808842] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:00.522 [2024-11-20 15:24:46.808866] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:00.522 [2024-11-20 15:24:46.808875] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:00.522 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.522 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:00.522 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.522 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.522 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.522 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.522 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:00.522 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.522 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.522 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.522 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.522 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.522 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.522 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.522 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.522 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.522 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.522 "name": "raid_bdev1", 00:17:00.522 "uuid": "8e6f522e-374c-4b6e-afac-a6463c5d6bfb", 00:17:00.522 "strip_size_kb": 64, 00:17:00.522 "state": "online", 00:17:00.522 "raid_level": "raid5f", 00:17:00.522 "superblock": true, 00:17:00.522 "num_base_bdevs": 4, 00:17:00.522 "num_base_bdevs_discovered": 3, 00:17:00.522 "num_base_bdevs_operational": 3, 00:17:00.522 "base_bdevs_list": [ 00:17:00.522 { 00:17:00.522 "name": null, 00:17:00.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.522 "is_configured": 
false, 00:17:00.522 "data_offset": 0, 00:17:00.522 "data_size": 63488 00:17:00.522 }, 00:17:00.522 { 00:17:00.522 "name": "BaseBdev2", 00:17:00.522 "uuid": "b9ea7341-daf6-5a4d-aa61-bf10a6d7449d", 00:17:00.522 "is_configured": true, 00:17:00.522 "data_offset": 2048, 00:17:00.522 "data_size": 63488 00:17:00.522 }, 00:17:00.522 { 00:17:00.522 "name": "BaseBdev3", 00:17:00.522 "uuid": "e8c54f98-d470-506c-a8ac-84bbf50a5152", 00:17:00.522 "is_configured": true, 00:17:00.522 "data_offset": 2048, 00:17:00.522 "data_size": 63488 00:17:00.522 }, 00:17:00.522 { 00:17:00.522 "name": "BaseBdev4", 00:17:00.522 "uuid": "69c62172-6a25-5b33-bd34-bacaaf3d1e51", 00:17:00.522 "is_configured": true, 00:17:00.522 "data_offset": 2048, 00:17:00.522 "data_size": 63488 00:17:00.522 } 00:17:00.522 ] 00:17:00.522 }' 00:17:00.522 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.522 15:24:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.780 15:24:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:00.780 15:24:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.780 15:24:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:00.780 15:24:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:00.780 15:24:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.780 15:24:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.780 15:24:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.780 15:24:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.780 15:24:47 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:01.039 15:24:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.039 15:24:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.039 "name": "raid_bdev1", 00:17:01.039 "uuid": "8e6f522e-374c-4b6e-afac-a6463c5d6bfb", 00:17:01.039 "strip_size_kb": 64, 00:17:01.039 "state": "online", 00:17:01.039 "raid_level": "raid5f", 00:17:01.039 "superblock": true, 00:17:01.039 "num_base_bdevs": 4, 00:17:01.039 "num_base_bdevs_discovered": 3, 00:17:01.039 "num_base_bdevs_operational": 3, 00:17:01.039 "base_bdevs_list": [ 00:17:01.039 { 00:17:01.039 "name": null, 00:17:01.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.039 "is_configured": false, 00:17:01.039 "data_offset": 0, 00:17:01.039 "data_size": 63488 00:17:01.039 }, 00:17:01.039 { 00:17:01.039 "name": "BaseBdev2", 00:17:01.039 "uuid": "b9ea7341-daf6-5a4d-aa61-bf10a6d7449d", 00:17:01.039 "is_configured": true, 00:17:01.039 "data_offset": 2048, 00:17:01.039 "data_size": 63488 00:17:01.039 }, 00:17:01.039 { 00:17:01.039 "name": "BaseBdev3", 00:17:01.039 "uuid": "e8c54f98-d470-506c-a8ac-84bbf50a5152", 00:17:01.039 "is_configured": true, 00:17:01.039 "data_offset": 2048, 00:17:01.039 "data_size": 63488 00:17:01.039 }, 00:17:01.039 { 00:17:01.039 "name": "BaseBdev4", 00:17:01.039 "uuid": "69c62172-6a25-5b33-bd34-bacaaf3d1e51", 00:17:01.039 "is_configured": true, 00:17:01.039 "data_offset": 2048, 00:17:01.039 "data_size": 63488 00:17:01.039 } 00:17:01.039 ] 00:17:01.039 }' 00:17:01.039 15:24:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.039 15:24:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:01.039 15:24:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.039 15:24:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 
-- # [[ none == \n\o\n\e ]] 00:17:01.039 15:24:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:01.039 15:24:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.039 15:24:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.039 15:24:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.039 15:24:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:01.039 15:24:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.039 15:24:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.039 [2024-11-20 15:24:47.372783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:01.039 [2024-11-20 15:24:47.372858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.039 [2024-11-20 15:24:47.372884] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:01.039 [2024-11-20 15:24:47.372896] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.039 [2024-11-20 15:24:47.373412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.039 [2024-11-20 15:24:47.373433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:01.039 [2024-11-20 15:24:47.373527] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:01.039 [2024-11-20 15:24:47.373543] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:01.039 [2024-11-20 15:24:47.373559] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain 
this bdev's uuid 00:17:01.039 [2024-11-20 15:24:47.373572] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:01.039 BaseBdev1 00:17:01.039 15:24:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.039 15:24:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:01.974 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:01.974 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.974 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.974 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.974 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.974 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:01.974 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.974 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.974 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.974 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.974 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.974 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.974 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.974 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.974 15:24:48 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.974 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.974 "name": "raid_bdev1", 00:17:01.974 "uuid": "8e6f522e-374c-4b6e-afac-a6463c5d6bfb", 00:17:01.974 "strip_size_kb": 64, 00:17:01.974 "state": "online", 00:17:01.974 "raid_level": "raid5f", 00:17:01.974 "superblock": true, 00:17:01.974 "num_base_bdevs": 4, 00:17:01.974 "num_base_bdevs_discovered": 3, 00:17:01.974 "num_base_bdevs_operational": 3, 00:17:01.974 "base_bdevs_list": [ 00:17:01.974 { 00:17:01.974 "name": null, 00:17:01.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.974 "is_configured": false, 00:17:01.974 "data_offset": 0, 00:17:01.974 "data_size": 63488 00:17:01.974 }, 00:17:01.974 { 00:17:01.974 "name": "BaseBdev2", 00:17:01.974 "uuid": "b9ea7341-daf6-5a4d-aa61-bf10a6d7449d", 00:17:01.974 "is_configured": true, 00:17:01.974 "data_offset": 2048, 00:17:01.974 "data_size": 63488 00:17:01.974 }, 00:17:01.974 { 00:17:01.974 "name": "BaseBdev3", 00:17:01.974 "uuid": "e8c54f98-d470-506c-a8ac-84bbf50a5152", 00:17:01.974 "is_configured": true, 00:17:01.974 "data_offset": 2048, 00:17:01.974 "data_size": 63488 00:17:01.974 }, 00:17:01.974 { 00:17:01.974 "name": "BaseBdev4", 00:17:01.974 "uuid": "69c62172-6a25-5b33-bd34-bacaaf3d1e51", 00:17:01.974 "is_configured": true, 00:17:01.974 "data_offset": 2048, 00:17:01.974 "data_size": 63488 00:17:01.974 } 00:17:01.974 ] 00:17:01.974 }' 00:17:01.974 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.974 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.541 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:02.541 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.541 15:24:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:02.541 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:02.541 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.541 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.541 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.541 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.541 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.541 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.541 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.541 "name": "raid_bdev1", 00:17:02.541 "uuid": "8e6f522e-374c-4b6e-afac-a6463c5d6bfb", 00:17:02.541 "strip_size_kb": 64, 00:17:02.541 "state": "online", 00:17:02.541 "raid_level": "raid5f", 00:17:02.541 "superblock": true, 00:17:02.541 "num_base_bdevs": 4, 00:17:02.541 "num_base_bdevs_discovered": 3, 00:17:02.541 "num_base_bdevs_operational": 3, 00:17:02.541 "base_bdevs_list": [ 00:17:02.541 { 00:17:02.541 "name": null, 00:17:02.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.541 "is_configured": false, 00:17:02.541 "data_offset": 0, 00:17:02.541 "data_size": 63488 00:17:02.541 }, 00:17:02.541 { 00:17:02.541 "name": "BaseBdev2", 00:17:02.541 "uuid": "b9ea7341-daf6-5a4d-aa61-bf10a6d7449d", 00:17:02.541 "is_configured": true, 00:17:02.541 "data_offset": 2048, 00:17:02.541 "data_size": 63488 00:17:02.541 }, 00:17:02.541 { 00:17:02.541 "name": "BaseBdev3", 00:17:02.541 "uuid": "e8c54f98-d470-506c-a8ac-84bbf50a5152", 00:17:02.541 "is_configured": true, 00:17:02.541 "data_offset": 2048, 00:17:02.541 
"data_size": 63488 00:17:02.541 }, 00:17:02.541 { 00:17:02.541 "name": "BaseBdev4", 00:17:02.541 "uuid": "69c62172-6a25-5b33-bd34-bacaaf3d1e51", 00:17:02.541 "is_configured": true, 00:17:02.541 "data_offset": 2048, 00:17:02.541 "data_size": 63488 00:17:02.541 } 00:17:02.541 ] 00:17:02.541 }' 00:17:02.541 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.541 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:02.541 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.541 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:02.541 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:02.541 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:02.541 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:02.541 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:02.541 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.541 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:02.541 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.541 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:02.541 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.541 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.541 [2024-11-20 
15:24:48.922990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:02.542 [2024-11-20 15:24:48.923167] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:02.542 [2024-11-20 15:24:48.923185] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:02.542 request: 00:17:02.542 { 00:17:02.542 "base_bdev": "BaseBdev1", 00:17:02.542 "raid_bdev": "raid_bdev1", 00:17:02.542 "method": "bdev_raid_add_base_bdev", 00:17:02.542 "req_id": 1 00:17:02.542 } 00:17:02.542 Got JSON-RPC error response 00:17:02.542 response: 00:17:02.542 { 00:17:02.542 "code": -22, 00:17:02.542 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:02.542 } 00:17:02.542 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:02.542 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:02.542 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:02.542 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:02.542 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:02.542 15:24:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:03.483 15:24:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:03.483 15:24:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.483 15:24:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.483 15:24:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:03.483 15:24:49 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:03.483 15:24:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:03.483 15:24:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.483 15:24:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.483 15:24:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.483 15:24:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.483 15:24:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.483 15:24:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.483 15:24:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.483 15:24:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.743 15:24:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.743 15:24:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.743 "name": "raid_bdev1", 00:17:03.743 "uuid": "8e6f522e-374c-4b6e-afac-a6463c5d6bfb", 00:17:03.743 "strip_size_kb": 64, 00:17:03.743 "state": "online", 00:17:03.743 "raid_level": "raid5f", 00:17:03.743 "superblock": true, 00:17:03.743 "num_base_bdevs": 4, 00:17:03.743 "num_base_bdevs_discovered": 3, 00:17:03.743 "num_base_bdevs_operational": 3, 00:17:03.743 "base_bdevs_list": [ 00:17:03.743 { 00:17:03.743 "name": null, 00:17:03.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.743 "is_configured": false, 00:17:03.743 "data_offset": 0, 00:17:03.743 "data_size": 63488 00:17:03.743 }, 00:17:03.743 { 00:17:03.743 "name": "BaseBdev2", 00:17:03.743 "uuid": "b9ea7341-daf6-5a4d-aa61-bf10a6d7449d", 00:17:03.743 
"is_configured": true, 00:17:03.743 "data_offset": 2048, 00:17:03.743 "data_size": 63488 00:17:03.743 }, 00:17:03.743 { 00:17:03.743 "name": "BaseBdev3", 00:17:03.743 "uuid": "e8c54f98-d470-506c-a8ac-84bbf50a5152", 00:17:03.743 "is_configured": true, 00:17:03.743 "data_offset": 2048, 00:17:03.743 "data_size": 63488 00:17:03.743 }, 00:17:03.743 { 00:17:03.743 "name": "BaseBdev4", 00:17:03.743 "uuid": "69c62172-6a25-5b33-bd34-bacaaf3d1e51", 00:17:03.743 "is_configured": true, 00:17:03.743 "data_offset": 2048, 00:17:03.743 "data_size": 63488 00:17:03.743 } 00:17:03.743 ] 00:17:03.743 }' 00:17:03.743 15:24:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.743 15:24:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.002 15:24:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:04.002 15:24:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.002 15:24:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:04.002 15:24:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:04.002 15:24:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.002 15:24:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.002 15:24:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.002 15:24:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.002 15:24:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.002 15:24:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.002 15:24:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:04.002 "name": "raid_bdev1", 00:17:04.002 "uuid": "8e6f522e-374c-4b6e-afac-a6463c5d6bfb", 00:17:04.002 "strip_size_kb": 64, 00:17:04.002 "state": "online", 00:17:04.002 "raid_level": "raid5f", 00:17:04.002 "superblock": true, 00:17:04.002 "num_base_bdevs": 4, 00:17:04.002 "num_base_bdevs_discovered": 3, 00:17:04.002 "num_base_bdevs_operational": 3, 00:17:04.002 "base_bdevs_list": [ 00:17:04.002 { 00:17:04.002 "name": null, 00:17:04.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.002 "is_configured": false, 00:17:04.002 "data_offset": 0, 00:17:04.002 "data_size": 63488 00:17:04.002 }, 00:17:04.002 { 00:17:04.002 "name": "BaseBdev2", 00:17:04.002 "uuid": "b9ea7341-daf6-5a4d-aa61-bf10a6d7449d", 00:17:04.002 "is_configured": true, 00:17:04.002 "data_offset": 2048, 00:17:04.002 "data_size": 63488 00:17:04.002 }, 00:17:04.002 { 00:17:04.002 "name": "BaseBdev3", 00:17:04.002 "uuid": "e8c54f98-d470-506c-a8ac-84bbf50a5152", 00:17:04.002 "is_configured": true, 00:17:04.002 "data_offset": 2048, 00:17:04.002 "data_size": 63488 00:17:04.002 }, 00:17:04.002 { 00:17:04.002 "name": "BaseBdev4", 00:17:04.002 "uuid": "69c62172-6a25-5b33-bd34-bacaaf3d1e51", 00:17:04.002 "is_configured": true, 00:17:04.002 "data_offset": 2048, 00:17:04.002 "data_size": 63488 00:17:04.002 } 00:17:04.002 ] 00:17:04.002 }' 00:17:04.002 15:24:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.002 15:24:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:04.003 15:24:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.264 15:24:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:04.264 15:24:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 84924 00:17:04.264 15:24:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 
84924 ']' 00:17:04.264 15:24:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 84924 00:17:04.264 15:24:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:04.264 15:24:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:04.264 15:24:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84924 00:17:04.264 15:24:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:04.264 killing process with pid 84924 00:17:04.264 Received shutdown signal, test time was about 60.000000 seconds 00:17:04.264 00:17:04.264 Latency(us) 00:17:04.264 [2024-11-20T15:24:50.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.264 [2024-11-20T15:24:50.746Z] =================================================================================================================== 00:17:04.264 [2024-11-20T15:24:50.746Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:04.264 15:24:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:04.264 15:24:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84924' 00:17:04.264 15:24:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 84924 00:17:04.264 [2024-11-20 15:24:50.548586] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:04.264 15:24:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 84924 00:17:04.264 [2024-11-20 15:24:50.548739] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:04.264 [2024-11-20 15:24:50.548823] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:04.264 [2024-11-20 15:24:50.548839] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:04.833 [2024-11-20 15:24:51.039445] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:05.771 15:24:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:05.771 00:17:05.771 real 0m26.481s 00:17:05.771 user 0m32.825s 00:17:05.771 sys 0m3.218s 00:17:05.771 ************************************ 00:17:05.771 END TEST raid5f_rebuild_test_sb 00:17:05.771 ************************************ 00:17:05.771 15:24:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:05.771 15:24:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.771 15:24:52 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:17:05.771 15:24:52 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:05.771 15:24:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:05.771 15:24:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:05.771 15:24:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:06.030 ************************************ 00:17:06.030 START TEST raid_state_function_test_sb_4k 00:17:06.030 ************************************ 00:17:06.030 15:24:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:06.030 15:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:06.031 15:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:06.031 15:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:06.031 15:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:06.031 15:24:52 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:06.031 15:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:06.031 15:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:06.031 15:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:06.031 15:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:06.031 15:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:06.031 15:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:06.031 15:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:06.031 15:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:06.031 15:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:06.031 15:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:06.031 15:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:06.031 15:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:06.031 15:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:06.031 15:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:06.031 15:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:06.031 15:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:06.031 15:24:52 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:06.031 15:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85726 00:17:06.031 Process raid pid: 85726 00:17:06.031 15:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:06.031 15:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85726' 00:17:06.031 15:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85726 00:17:06.031 15:24:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 85726 ']' 00:17:06.031 15:24:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.031 15:24:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:06.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.031 15:24:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.031 15:24:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:06.031 15:24:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.031 [2024-11-20 15:24:52.360510] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:17:06.031 [2024-11-20 15:24:52.360648] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.291 [2024-11-20 15:24:52.541135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.291 [2024-11-20 15:24:52.665825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.551 [2024-11-20 15:24:52.886407] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:06.551 [2024-11-20 15:24:52.886461] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:06.834 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:06.834 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:06.834 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:06.834 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.834 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.834 [2024-11-20 15:24:53.217807] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:06.834 [2024-11-20 15:24:53.217869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:06.834 [2024-11-20 15:24:53.217881] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:06.834 [2024-11-20 15:24:53.217894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:06.834 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:06.834 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:06.834 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:06.834 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.834 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:06.834 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:06.834 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:06.834 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.834 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.834 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.834 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.834 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.834 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.834 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.834 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.834 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.834 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.834 "name": "Existed_Raid", 00:17:06.834 "uuid": 
"402628e3-6f17-44ca-af7f-31a2a30bfaeb", 00:17:06.834 "strip_size_kb": 0, 00:17:06.834 "state": "configuring", 00:17:06.834 "raid_level": "raid1", 00:17:06.834 "superblock": true, 00:17:06.834 "num_base_bdevs": 2, 00:17:06.834 "num_base_bdevs_discovered": 0, 00:17:06.834 "num_base_bdevs_operational": 2, 00:17:06.834 "base_bdevs_list": [ 00:17:06.834 { 00:17:06.834 "name": "BaseBdev1", 00:17:06.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.834 "is_configured": false, 00:17:06.834 "data_offset": 0, 00:17:06.834 "data_size": 0 00:17:06.834 }, 00:17:06.834 { 00:17:06.834 "name": "BaseBdev2", 00:17:06.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.834 "is_configured": false, 00:17:06.834 "data_offset": 0, 00:17:06.834 "data_size": 0 00:17:06.834 } 00:17:06.834 ] 00:17:06.834 }' 00:17:06.834 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.834 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.403 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:07.403 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.403 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.403 [2024-11-20 15:24:53.649118] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:07.403 [2024-11-20 15:24:53.649308] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:07.403 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.403 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:07.403 15:24:53 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.403 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.403 [2024-11-20 15:24:53.661100] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:07.403 [2024-11-20 15:24:53.661305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:07.403 [2024-11-20 15:24:53.661393] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:07.403 [2024-11-20 15:24:53.661421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:07.403 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.403 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:17:07.403 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.403 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.403 [2024-11-20 15:24:53.711826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:07.403 BaseBdev1 00:17:07.403 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.403 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:07.403 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:07.404 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:07.404 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:07.404 15:24:53 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:07.404 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:07.404 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:07.404 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.404 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.404 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.404 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:07.404 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.404 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.404 [ 00:17:07.404 { 00:17:07.404 "name": "BaseBdev1", 00:17:07.404 "aliases": [ 00:17:07.404 "5e529583-af83-48f6-a7fc-fb2f8c4272c4" 00:17:07.404 ], 00:17:07.404 "product_name": "Malloc disk", 00:17:07.404 "block_size": 4096, 00:17:07.404 "num_blocks": 8192, 00:17:07.404 "uuid": "5e529583-af83-48f6-a7fc-fb2f8c4272c4", 00:17:07.404 "assigned_rate_limits": { 00:17:07.404 "rw_ios_per_sec": 0, 00:17:07.404 "rw_mbytes_per_sec": 0, 00:17:07.404 "r_mbytes_per_sec": 0, 00:17:07.404 "w_mbytes_per_sec": 0 00:17:07.404 }, 00:17:07.404 "claimed": true, 00:17:07.404 "claim_type": "exclusive_write", 00:17:07.404 "zoned": false, 00:17:07.404 "supported_io_types": { 00:17:07.404 "read": true, 00:17:07.404 "write": true, 00:17:07.404 "unmap": true, 00:17:07.404 "flush": true, 00:17:07.404 "reset": true, 00:17:07.404 "nvme_admin": false, 00:17:07.404 "nvme_io": false, 00:17:07.404 "nvme_io_md": false, 00:17:07.404 "write_zeroes": true, 00:17:07.404 "zcopy": true, 00:17:07.404 
"get_zone_info": false, 00:17:07.404 "zone_management": false, 00:17:07.404 "zone_append": false, 00:17:07.404 "compare": false, 00:17:07.404 "compare_and_write": false, 00:17:07.404 "abort": true, 00:17:07.404 "seek_hole": false, 00:17:07.404 "seek_data": false, 00:17:07.404 "copy": true, 00:17:07.404 "nvme_iov_md": false 00:17:07.404 }, 00:17:07.404 "memory_domains": [ 00:17:07.404 { 00:17:07.404 "dma_device_id": "system", 00:17:07.404 "dma_device_type": 1 00:17:07.404 }, 00:17:07.404 { 00:17:07.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.404 "dma_device_type": 2 00:17:07.404 } 00:17:07.404 ], 00:17:07.404 "driver_specific": {} 00:17:07.404 } 00:17:07.404 ] 00:17:07.404 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.404 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:07.404 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:07.404 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:07.404 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:07.404 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:07.404 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:07.404 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:07.404 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.404 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.404 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:07.404 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.404 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.404 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.404 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.404 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.404 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.404 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.404 "name": "Existed_Raid", 00:17:07.404 "uuid": "3aa16210-0ff0-4d1c-b0c9-8854d6f96291", 00:17:07.404 "strip_size_kb": 0, 00:17:07.404 "state": "configuring", 00:17:07.404 "raid_level": "raid1", 00:17:07.404 "superblock": true, 00:17:07.404 "num_base_bdevs": 2, 00:17:07.404 "num_base_bdevs_discovered": 1, 00:17:07.404 "num_base_bdevs_operational": 2, 00:17:07.404 "base_bdevs_list": [ 00:17:07.404 { 00:17:07.404 "name": "BaseBdev1", 00:17:07.404 "uuid": "5e529583-af83-48f6-a7fc-fb2f8c4272c4", 00:17:07.404 "is_configured": true, 00:17:07.404 "data_offset": 256, 00:17:07.404 "data_size": 7936 00:17:07.404 }, 00:17:07.404 { 00:17:07.404 "name": "BaseBdev2", 00:17:07.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.404 "is_configured": false, 00:17:07.404 "data_offset": 0, 00:17:07.404 "data_size": 0 00:17:07.404 } 00:17:07.404 ] 00:17:07.404 }' 00:17:07.404 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.404 15:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.972 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:07.973 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.973 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.973 [2024-11-20 15:24:54.167268] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:07.973 [2024-11-20 15:24:54.167512] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:07.973 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.973 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:07.973 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.973 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.973 [2024-11-20 15:24:54.179323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:07.973 [2024-11-20 15:24:54.181653] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:07.973 [2024-11-20 15:24:54.181845] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:07.973 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.973 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:07.973 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:07.973 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:07.973 15:24:54 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:07.973 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:07.973 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:07.973 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:07.973 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:07.973 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.973 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.973 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.973 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.973 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.973 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.973 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.973 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.973 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.973 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.973 "name": "Existed_Raid", 00:17:07.973 "uuid": "4402f688-f52f-4719-8e1e-3f0d2f4ea720", 00:17:07.973 "strip_size_kb": 0, 00:17:07.973 "state": "configuring", 00:17:07.973 "raid_level": "raid1", 00:17:07.973 "superblock": true, 
00:17:07.973 "num_base_bdevs": 2, 00:17:07.973 "num_base_bdevs_discovered": 1, 00:17:07.973 "num_base_bdevs_operational": 2, 00:17:07.973 "base_bdevs_list": [ 00:17:07.973 { 00:17:07.973 "name": "BaseBdev1", 00:17:07.973 "uuid": "5e529583-af83-48f6-a7fc-fb2f8c4272c4", 00:17:07.973 "is_configured": true, 00:17:07.973 "data_offset": 256, 00:17:07.973 "data_size": 7936 00:17:07.973 }, 00:17:07.973 { 00:17:07.973 "name": "BaseBdev2", 00:17:07.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.973 "is_configured": false, 00:17:07.973 "data_offset": 0, 00:17:07.973 "data_size": 0 00:17:07.973 } 00:17:07.973 ] 00:17:07.973 }' 00:17:07.973 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.973 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.231 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:17:08.231 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.231 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.231 [2024-11-20 15:24:54.679218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:08.231 BaseBdev2 00:17:08.231 [2024-11-20 15:24:54.679787] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:08.231 [2024-11-20 15:24:54.679809] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:08.231 [2024-11-20 15:24:54.680088] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:08.231 [2024-11-20 15:24:54.680260] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:08.231 [2024-11-20 15:24:54.680277] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x617000007e80 00:17:08.231 [2024-11-20 15:24:54.680416] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.231 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.231 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:08.231 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:08.231 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:08.231 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:08.231 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:08.231 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:08.231 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:08.231 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.232 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.232 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.232 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:08.232 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.232 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.232 [ 00:17:08.232 { 00:17:08.232 "name": "BaseBdev2", 00:17:08.232 "aliases": [ 00:17:08.232 "0d623671-8f38-4a29-a220-119940ddd141" 00:17:08.232 ], 00:17:08.232 "product_name": "Malloc 
disk", 00:17:08.232 "block_size": 4096, 00:17:08.232 "num_blocks": 8192, 00:17:08.232 "uuid": "0d623671-8f38-4a29-a220-119940ddd141", 00:17:08.232 "assigned_rate_limits": { 00:17:08.232 "rw_ios_per_sec": 0, 00:17:08.232 "rw_mbytes_per_sec": 0, 00:17:08.232 "r_mbytes_per_sec": 0, 00:17:08.232 "w_mbytes_per_sec": 0 00:17:08.232 }, 00:17:08.232 "claimed": true, 00:17:08.490 "claim_type": "exclusive_write", 00:17:08.490 "zoned": false, 00:17:08.490 "supported_io_types": { 00:17:08.490 "read": true, 00:17:08.490 "write": true, 00:17:08.490 "unmap": true, 00:17:08.490 "flush": true, 00:17:08.490 "reset": true, 00:17:08.490 "nvme_admin": false, 00:17:08.490 "nvme_io": false, 00:17:08.490 "nvme_io_md": false, 00:17:08.490 "write_zeroes": true, 00:17:08.490 "zcopy": true, 00:17:08.490 "get_zone_info": false, 00:17:08.490 "zone_management": false, 00:17:08.490 "zone_append": false, 00:17:08.490 "compare": false, 00:17:08.490 "compare_and_write": false, 00:17:08.490 "abort": true, 00:17:08.490 "seek_hole": false, 00:17:08.490 "seek_data": false, 00:17:08.490 "copy": true, 00:17:08.490 "nvme_iov_md": false 00:17:08.490 }, 00:17:08.490 "memory_domains": [ 00:17:08.490 { 00:17:08.490 "dma_device_id": "system", 00:17:08.490 "dma_device_type": 1 00:17:08.490 }, 00:17:08.490 { 00:17:08.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.490 "dma_device_type": 2 00:17:08.490 } 00:17:08.490 ], 00:17:08.490 "driver_specific": {} 00:17:08.490 } 00:17:08.490 ] 00:17:08.490 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.490 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:08.490 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:08.490 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:08.490 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:08.490 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:08.490 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.490 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.490 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.490 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:08.491 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.491 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.491 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.491 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.491 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.491 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.491 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.491 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.491 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.491 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.491 "name": "Existed_Raid", 00:17:08.491 "uuid": "4402f688-f52f-4719-8e1e-3f0d2f4ea720", 00:17:08.491 "strip_size_kb": 0, 00:17:08.491 "state": "online", 
00:17:08.491 "raid_level": "raid1", 00:17:08.491 "superblock": true, 00:17:08.491 "num_base_bdevs": 2, 00:17:08.491 "num_base_bdevs_discovered": 2, 00:17:08.491 "num_base_bdevs_operational": 2, 00:17:08.491 "base_bdevs_list": [ 00:17:08.491 { 00:17:08.491 "name": "BaseBdev1", 00:17:08.491 "uuid": "5e529583-af83-48f6-a7fc-fb2f8c4272c4", 00:17:08.491 "is_configured": true, 00:17:08.491 "data_offset": 256, 00:17:08.491 "data_size": 7936 00:17:08.491 }, 00:17:08.491 { 00:17:08.491 "name": "BaseBdev2", 00:17:08.491 "uuid": "0d623671-8f38-4a29-a220-119940ddd141", 00:17:08.491 "is_configured": true, 00:17:08.491 "data_offset": 256, 00:17:08.491 "data_size": 7936 00:17:08.491 } 00:17:08.491 ] 00:17:08.491 }' 00:17:08.491 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.491 15:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.750 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:08.750 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:08.750 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:08.750 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:08.750 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:08.750 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:08.750 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:08.751 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.751 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 
00:17:08.751 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.751 [2024-11-20 15:24:55.159100] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:08.751 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.751 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:08.751 "name": "Existed_Raid", 00:17:08.751 "aliases": [ 00:17:08.751 "4402f688-f52f-4719-8e1e-3f0d2f4ea720" 00:17:08.751 ], 00:17:08.751 "product_name": "Raid Volume", 00:17:08.751 "block_size": 4096, 00:17:08.751 "num_blocks": 7936, 00:17:08.751 "uuid": "4402f688-f52f-4719-8e1e-3f0d2f4ea720", 00:17:08.751 "assigned_rate_limits": { 00:17:08.751 "rw_ios_per_sec": 0, 00:17:08.751 "rw_mbytes_per_sec": 0, 00:17:08.751 "r_mbytes_per_sec": 0, 00:17:08.751 "w_mbytes_per_sec": 0 00:17:08.751 }, 00:17:08.751 "claimed": false, 00:17:08.751 "zoned": false, 00:17:08.751 "supported_io_types": { 00:17:08.751 "read": true, 00:17:08.751 "write": true, 00:17:08.751 "unmap": false, 00:17:08.751 "flush": false, 00:17:08.751 "reset": true, 00:17:08.751 "nvme_admin": false, 00:17:08.751 "nvme_io": false, 00:17:08.751 "nvme_io_md": false, 00:17:08.751 "write_zeroes": true, 00:17:08.751 "zcopy": false, 00:17:08.751 "get_zone_info": false, 00:17:08.751 "zone_management": false, 00:17:08.751 "zone_append": false, 00:17:08.751 "compare": false, 00:17:08.751 "compare_and_write": false, 00:17:08.751 "abort": false, 00:17:08.751 "seek_hole": false, 00:17:08.751 "seek_data": false, 00:17:08.751 "copy": false, 00:17:08.751 "nvme_iov_md": false 00:17:08.751 }, 00:17:08.751 "memory_domains": [ 00:17:08.751 { 00:17:08.751 "dma_device_id": "system", 00:17:08.751 "dma_device_type": 1 00:17:08.751 }, 00:17:08.751 { 00:17:08.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.751 "dma_device_type": 2 00:17:08.751 }, 00:17:08.751 { 00:17:08.751 
"dma_device_id": "system", 00:17:08.751 "dma_device_type": 1 00:17:08.751 }, 00:17:08.751 { 00:17:08.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.751 "dma_device_type": 2 00:17:08.751 } 00:17:08.751 ], 00:17:08.751 "driver_specific": { 00:17:08.751 "raid": { 00:17:08.751 "uuid": "4402f688-f52f-4719-8e1e-3f0d2f4ea720", 00:17:08.751 "strip_size_kb": 0, 00:17:08.751 "state": "online", 00:17:08.751 "raid_level": "raid1", 00:17:08.751 "superblock": true, 00:17:08.751 "num_base_bdevs": 2, 00:17:08.751 "num_base_bdevs_discovered": 2, 00:17:08.751 "num_base_bdevs_operational": 2, 00:17:08.751 "base_bdevs_list": [ 00:17:08.751 { 00:17:08.751 "name": "BaseBdev1", 00:17:08.751 "uuid": "5e529583-af83-48f6-a7fc-fb2f8c4272c4", 00:17:08.751 "is_configured": true, 00:17:08.751 "data_offset": 256, 00:17:08.751 "data_size": 7936 00:17:08.751 }, 00:17:08.751 { 00:17:08.751 "name": "BaseBdev2", 00:17:08.751 "uuid": "0d623671-8f38-4a29-a220-119940ddd141", 00:17:08.751 "is_configured": true, 00:17:08.751 "data_offset": 256, 00:17:08.751 "data_size": 7936 00:17:08.751 } 00:17:08.751 ] 00:17:08.751 } 00:17:08.751 } 00:17:08.751 }' 00:17:08.751 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:09.011 BaseBdev2' 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.011 
15:24:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.011 [2024-11-20 15:24:55.386907] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.011 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.011 15:24:55 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.272 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.272 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.272 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.272 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.272 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.272 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.272 "name": "Existed_Raid", 00:17:09.272 "uuid": "4402f688-f52f-4719-8e1e-3f0d2f4ea720", 00:17:09.272 "strip_size_kb": 0, 00:17:09.272 "state": "online", 00:17:09.272 "raid_level": "raid1", 00:17:09.272 "superblock": true, 00:17:09.272 "num_base_bdevs": 2, 00:17:09.272 "num_base_bdevs_discovered": 1, 00:17:09.272 "num_base_bdevs_operational": 1, 00:17:09.272 "base_bdevs_list": [ 00:17:09.272 { 00:17:09.272 "name": null, 00:17:09.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.272 "is_configured": false, 00:17:09.272 "data_offset": 0, 00:17:09.272 "data_size": 7936 00:17:09.272 }, 00:17:09.272 { 00:17:09.272 "name": "BaseBdev2", 00:17:09.272 "uuid": "0d623671-8f38-4a29-a220-119940ddd141", 00:17:09.272 "is_configured": true, 00:17:09.272 "data_offset": 256, 00:17:09.272 "data_size": 7936 00:17:09.272 } 00:17:09.272 ] 00:17:09.272 }' 00:17:09.272 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.272 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.531 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:09.531 15:24:55 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:09.531 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.531 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.531 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.531 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:09.531 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.531 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:09.531 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:09.531 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:09.531 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.531 15:24:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.531 [2024-11-20 15:24:55.979839] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:09.531 [2024-11-20 15:24:55.980149] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:09.791 [2024-11-20 15:24:56.075017] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:09.791 [2024-11-20 15:24:56.075080] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:09.791 [2024-11-20 15:24:56.075095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:09.791 15:24:56 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.791 15:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:09.791 15:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:09.791 15:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.791 15:24:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.791 15:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:09.791 15:24:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.791 15:24:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.791 15:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:09.791 15:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:09.791 15:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:09.791 15:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85726 00:17:09.791 15:24:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 85726 ']' 00:17:09.791 15:24:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 85726 00:17:09.791 15:24:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:09.791 15:24:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:09.791 15:24:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85726 00:17:09.791 killing process with pid 85726 00:17:09.791 15:24:56 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:09.791 15:24:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:09.791 15:24:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85726' 00:17:09.791 15:24:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 85726 00:17:09.791 [2024-11-20 15:24:56.171549] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:09.791 15:24:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 85726 00:17:09.791 [2024-11-20 15:24:56.188240] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:11.173 15:24:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:17:11.173 00:17:11.173 real 0m5.067s 00:17:11.173 user 0m7.267s 00:17:11.173 sys 0m0.968s 00:17:11.173 15:24:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:11.173 15:24:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.173 ************************************ 00:17:11.173 END TEST raid_state_function_test_sb_4k 00:17:11.173 ************************************ 00:17:11.173 15:24:57 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:11.173 15:24:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:11.173 15:24:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:11.173 15:24:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:11.173 ************************************ 00:17:11.173 START TEST raid_superblock_test_4k 00:17:11.173 ************************************ 00:17:11.173 15:24:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # 
raid_superblock_test raid1 2 00:17:11.173 15:24:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:11.173 15:24:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:11.173 15:24:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:11.173 15:24:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:11.173 15:24:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:11.173 15:24:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:11.173 15:24:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:11.173 15:24:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:11.173 15:24:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:11.173 15:24:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:11.173 15:24:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:11.173 15:24:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:11.173 15:24:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:11.173 15:24:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:11.173 15:24:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:11.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
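(Editor's note on the trace above: the `raid_superblock_test` helper repeatedly verifies raid bdev state by calling `rpc_cmd bdev_raid_get_bdevs all` and filtering the JSON with jq, e.g. `.[] | select(.name == "raid_bdev1")`, then comparing fields such as `state` and `num_base_bdevs_operational`. A minimal Python sketch of that verification step follows; the JSON shape is copied from the trace, while the function name `verify_raid_bdev_state` mirrors the shell helper of the same name and the standalone script form is an illustration, not SPDK code.)

```python
import json

# Sample bdev_raid_get_bdevs output, shaped like the payloads in the trace.
raid_bdevs = json.loads("""
[
  {
    "name": "raid_bdev1",
    "state": "online",
    "raid_level": "raid1",
    "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 2
  }
]
""")

def verify_raid_bdev_state(bdevs, name, expected_state, raid_level, operational):
    # Pick the named bdev (jq: '.[] | select(.name == "raid_bdev1")')
    # and compare the fields the shell helper checks.
    info = next(b for b in bdevs if b["name"] == name)
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["num_base_bdevs_operational"] == operational)

print(verify_raid_bdev_state(raid_bdevs, "raid_bdev1", "online", "raid1", 2))
```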
00:17:11.173 15:24:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=85974 00:17:11.173 15:24:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:11.173 15:24:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 85974 00:17:11.173 15:24:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 85974 ']' 00:17:11.173 15:24:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.173 15:24:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:11.173 15:24:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.173 15:24:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:11.173 15:24:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.174 [2024-11-20 15:24:57.503955] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:17:11.174 [2024-11-20 15:24:57.504319] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85974 ] 00:17:11.433 [2024-11-20 15:24:57.683434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.433 [2024-11-20 15:24:57.800065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.692 [2024-11-20 15:24:58.015538] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:11.692 [2024-11-20 15:24:58.015606] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:11.951 15:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:11.951 15:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:17:11.951 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:11.951 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:11.951 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:11.951 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:11.951 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:11.951 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:11.951 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:11.951 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:11.952 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:17:11.952 15:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.952 15:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.952 malloc1 00:17:11.952 15:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.952 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:11.952 15:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.952 15:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.952 [2024-11-20 15:24:58.407746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:11.952 [2024-11-20 15:24:58.407840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.952 [2024-11-20 15:24:58.407867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:11.952 [2024-11-20 15:24:58.407880] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.952 [2024-11-20 15:24:58.410476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.952 [2024-11-20 15:24:58.410518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:11.952 pt1 00:17:11.952 15:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.952 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:11.952 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:11.952 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:11.952 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:17:11.952 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:11.952 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:11.952 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:11.952 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:11.952 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:17:11.952 15:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.952 15:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.211 malloc2 00:17:12.211 15:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.211 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:12.211 15:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.211 15:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.211 [2024-11-20 15:24:58.461377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:12.211 [2024-11-20 15:24:58.461444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:12.211 [2024-11-20 15:24:58.461478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:12.211 [2024-11-20 15:24:58.461491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:12.211 [2024-11-20 15:24:58.463953] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:12.211 [2024-11-20 
15:24:58.463993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:12.211 pt2 00:17:12.211 15:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.211 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:12.211 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:12.211 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:12.211 15:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.211 15:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.211 [2024-11-20 15:24:58.473436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:12.211 [2024-11-20 15:24:58.475642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:12.211 [2024-11-20 15:24:58.475850] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:12.211 [2024-11-20 15:24:58.475869] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:12.211 [2024-11-20 15:24:58.476165] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:12.211 [2024-11-20 15:24:58.476337] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:12.211 [2024-11-20 15:24:58.476354] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:12.211 [2024-11-20 15:24:58.476548] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:12.211 15:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.211 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:12.211 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:12.211 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:12.211 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:12.211 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:12.211 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:12.211 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.211 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.211 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.211 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.211 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.211 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.211 15:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.211 15:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.211 15:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.211 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.211 "name": "raid_bdev1", 00:17:12.211 "uuid": "5d90acd5-2e08-453c-abad-a7162eb235dc", 00:17:12.211 "strip_size_kb": 0, 00:17:12.211 "state": "online", 00:17:12.211 "raid_level": "raid1", 00:17:12.211 "superblock": true, 00:17:12.211 "num_base_bdevs": 2, 00:17:12.211 
"num_base_bdevs_discovered": 2, 00:17:12.211 "num_base_bdevs_operational": 2, 00:17:12.211 "base_bdevs_list": [ 00:17:12.211 { 00:17:12.211 "name": "pt1", 00:17:12.211 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:12.211 "is_configured": true, 00:17:12.211 "data_offset": 256, 00:17:12.211 "data_size": 7936 00:17:12.211 }, 00:17:12.211 { 00:17:12.211 "name": "pt2", 00:17:12.211 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:12.211 "is_configured": true, 00:17:12.211 "data_offset": 256, 00:17:12.211 "data_size": 7936 00:17:12.211 } 00:17:12.211 ] 00:17:12.211 }' 00:17:12.211 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.211 15:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.471 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:12.471 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:12.471 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:12.471 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:12.471 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:12.471 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:12.471 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:12.471 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:12.471 15:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.471 15:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.471 [2024-11-20 15:24:58.913136] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:17:12.471 15:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.471 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:12.471 "name": "raid_bdev1", 00:17:12.471 "aliases": [ 00:17:12.471 "5d90acd5-2e08-453c-abad-a7162eb235dc" 00:17:12.471 ], 00:17:12.471 "product_name": "Raid Volume", 00:17:12.471 "block_size": 4096, 00:17:12.471 "num_blocks": 7936, 00:17:12.471 "uuid": "5d90acd5-2e08-453c-abad-a7162eb235dc", 00:17:12.471 "assigned_rate_limits": { 00:17:12.471 "rw_ios_per_sec": 0, 00:17:12.471 "rw_mbytes_per_sec": 0, 00:17:12.471 "r_mbytes_per_sec": 0, 00:17:12.471 "w_mbytes_per_sec": 0 00:17:12.471 }, 00:17:12.471 "claimed": false, 00:17:12.471 "zoned": false, 00:17:12.471 "supported_io_types": { 00:17:12.471 "read": true, 00:17:12.471 "write": true, 00:17:12.471 "unmap": false, 00:17:12.471 "flush": false, 00:17:12.471 "reset": true, 00:17:12.471 "nvme_admin": false, 00:17:12.471 "nvme_io": false, 00:17:12.471 "nvme_io_md": false, 00:17:12.471 "write_zeroes": true, 00:17:12.471 "zcopy": false, 00:17:12.471 "get_zone_info": false, 00:17:12.471 "zone_management": false, 00:17:12.471 "zone_append": false, 00:17:12.471 "compare": false, 00:17:12.471 "compare_and_write": false, 00:17:12.471 "abort": false, 00:17:12.471 "seek_hole": false, 00:17:12.471 "seek_data": false, 00:17:12.471 "copy": false, 00:17:12.471 "nvme_iov_md": false 00:17:12.471 }, 00:17:12.471 "memory_domains": [ 00:17:12.471 { 00:17:12.471 "dma_device_id": "system", 00:17:12.471 "dma_device_type": 1 00:17:12.471 }, 00:17:12.471 { 00:17:12.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.471 "dma_device_type": 2 00:17:12.471 }, 00:17:12.471 { 00:17:12.471 "dma_device_id": "system", 00:17:12.471 "dma_device_type": 1 00:17:12.471 }, 00:17:12.471 { 00:17:12.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.471 "dma_device_type": 2 00:17:12.471 } 00:17:12.471 ], 
00:17:12.471 "driver_specific": { 00:17:12.471 "raid": { 00:17:12.471 "uuid": "5d90acd5-2e08-453c-abad-a7162eb235dc", 00:17:12.471 "strip_size_kb": 0, 00:17:12.471 "state": "online", 00:17:12.471 "raid_level": "raid1", 00:17:12.471 "superblock": true, 00:17:12.471 "num_base_bdevs": 2, 00:17:12.471 "num_base_bdevs_discovered": 2, 00:17:12.471 "num_base_bdevs_operational": 2, 00:17:12.471 "base_bdevs_list": [ 00:17:12.471 { 00:17:12.471 "name": "pt1", 00:17:12.471 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:12.471 "is_configured": true, 00:17:12.471 "data_offset": 256, 00:17:12.471 "data_size": 7936 00:17:12.471 }, 00:17:12.471 { 00:17:12.471 "name": "pt2", 00:17:12.472 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:12.472 "is_configured": true, 00:17:12.472 "data_offset": 256, 00:17:12.472 "data_size": 7936 00:17:12.472 } 00:17:12.472 ] 00:17:12.472 } 00:17:12.472 } 00:17:12.472 }' 00:17:12.760 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:12.760 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:12.760 pt2' 00:17:12.760 15:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:12.760 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:12.760 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:12.760 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:12.760 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:12.760 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.760 15:24:59 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.760 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.760 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:12.760 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:12.760 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:12.760 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:12.760 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:12.760 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.760 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.760 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.761 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:12.761 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:12.761 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:12.761 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:12.761 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.761 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.761 [2024-11-20 15:24:59.128836] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:12.761 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:17:12.761 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5d90acd5-2e08-453c-abad-a7162eb235dc 00:17:12.761 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 5d90acd5-2e08-453c-abad-a7162eb235dc ']' 00:17:12.761 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:12.761 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.761 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.761 [2024-11-20 15:24:59.168471] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:12.761 [2024-11-20 15:24:59.168508] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:12.761 [2024-11-20 15:24:59.168600] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:12.761 [2024-11-20 15:24:59.168672] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:12.761 [2024-11-20 15:24:59.168689] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:12.761 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.761 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.761 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.761 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.761 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:12.761 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.761 15:24:59 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:12.761 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:12.761 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:12.761 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:12.761 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.761 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.761 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.761 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:12.761 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:12.761 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.761 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.029 [2024-11-20 15:24:59.296323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:13.029 [2024-11-20 15:24:59.298469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:13.029 [2024-11-20 15:24:59.298546] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:13.029 [2024-11-20 15:24:59.298603] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:13.029 [2024-11-20 15:24:59.298622] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:13.029 [2024-11-20 15:24:59.298635] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:13.029 request: 00:17:13.029 { 00:17:13.029 "name": "raid_bdev1", 00:17:13.029 "raid_level": "raid1", 00:17:13.029 "base_bdevs": [ 00:17:13.029 "malloc1", 00:17:13.029 "malloc2" 00:17:13.029 ], 00:17:13.029 "superblock": false, 00:17:13.029 "method": "bdev_raid_create", 00:17:13.029 "req_id": 1 00:17:13.029 } 00:17:13.029 Got JSON-RPC error response 00:17:13.029 response: 00:17:13.029 { 00:17:13.029 "code": -17, 00:17:13.029 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:13.029 } 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.029 [2024-11-20 15:24:59.360228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:13.029 [2024-11-20 15:24:59.360302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.029 [2024-11-20 15:24:59.360326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:13.029 [2024-11-20 15:24:59.360341] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.029 [2024-11-20 15:24:59.362848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.029 [2024-11-20 15:24:59.362890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:13.029 [2024-11-20 15:24:59.362979] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:13.029 [2024-11-20 15:24:59.363038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:13.029 pt1 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.029 "name": "raid_bdev1", 00:17:13.029 "uuid": "5d90acd5-2e08-453c-abad-a7162eb235dc", 00:17:13.029 "strip_size_kb": 0, 00:17:13.029 "state": "configuring", 00:17:13.029 "raid_level": "raid1", 00:17:13.029 "superblock": true, 00:17:13.029 "num_base_bdevs": 2, 00:17:13.029 "num_base_bdevs_discovered": 1, 00:17:13.029 "num_base_bdevs_operational": 2, 00:17:13.029 "base_bdevs_list": [ 00:17:13.029 { 00:17:13.029 "name": "pt1", 00:17:13.029 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:13.029 "is_configured": true, 00:17:13.029 "data_offset": 256, 00:17:13.029 "data_size": 7936 00:17:13.029 }, 00:17:13.029 { 00:17:13.029 "name": null, 00:17:13.029 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:13.029 "is_configured": false, 00:17:13.029 "data_offset": 256, 00:17:13.029 "data_size": 7936 00:17:13.029 } 
00:17:13.029 ] 00:17:13.029 }' 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.029 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.288 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:13.288 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:13.288 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:13.288 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:13.288 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.288 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.288 [2024-11-20 15:24:59.739743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:13.288 [2024-11-20 15:24:59.739836] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.288 [2024-11-20 15:24:59.739877] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:13.288 [2024-11-20 15:24:59.739893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.288 [2024-11-20 15:24:59.740374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.288 [2024-11-20 15:24:59.740399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:13.288 [2024-11-20 15:24:59.740483] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:13.288 [2024-11-20 15:24:59.740513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:13.288 [2024-11-20 15:24:59.740633] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:17:13.288 [2024-11-20 15:24:59.740647] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:13.288 [2024-11-20 15:24:59.740925] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:13.288 [2024-11-20 15:24:59.741075] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:13.288 [2024-11-20 15:24:59.741084] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:13.288 [2024-11-20 15:24:59.741243] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.288 pt2 00:17:13.288 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.288 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:13.288 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:13.288 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:13.288 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.288 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.288 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.288 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.288 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:13.288 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.288 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.288 15:24:59 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.288 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.288 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.288 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.288 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.288 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.288 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.548 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.548 "name": "raid_bdev1", 00:17:13.548 "uuid": "5d90acd5-2e08-453c-abad-a7162eb235dc", 00:17:13.548 "strip_size_kb": 0, 00:17:13.548 "state": "online", 00:17:13.548 "raid_level": "raid1", 00:17:13.548 "superblock": true, 00:17:13.548 "num_base_bdevs": 2, 00:17:13.548 "num_base_bdevs_discovered": 2, 00:17:13.548 "num_base_bdevs_operational": 2, 00:17:13.548 "base_bdevs_list": [ 00:17:13.548 { 00:17:13.548 "name": "pt1", 00:17:13.548 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:13.548 "is_configured": true, 00:17:13.548 "data_offset": 256, 00:17:13.548 "data_size": 7936 00:17:13.548 }, 00:17:13.548 { 00:17:13.548 "name": "pt2", 00:17:13.548 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:13.548 "is_configured": true, 00:17:13.548 "data_offset": 256, 00:17:13.548 "data_size": 7936 00:17:13.548 } 00:17:13.548 ] 00:17:13.548 }' 00:17:13.548 15:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.548 15:24:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.808 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:17:13.808 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:13.808 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:13.808 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:13.808 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:13.808 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:13.808 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:13.808 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.808 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.808 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:13.808 [2024-11-20 15:25:00.215240] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:13.808 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.808 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:13.808 "name": "raid_bdev1", 00:17:13.808 "aliases": [ 00:17:13.808 "5d90acd5-2e08-453c-abad-a7162eb235dc" 00:17:13.808 ], 00:17:13.808 "product_name": "Raid Volume", 00:17:13.809 "block_size": 4096, 00:17:13.809 "num_blocks": 7936, 00:17:13.809 "uuid": "5d90acd5-2e08-453c-abad-a7162eb235dc", 00:17:13.809 "assigned_rate_limits": { 00:17:13.809 "rw_ios_per_sec": 0, 00:17:13.809 "rw_mbytes_per_sec": 0, 00:17:13.809 "r_mbytes_per_sec": 0, 00:17:13.809 "w_mbytes_per_sec": 0 00:17:13.809 }, 00:17:13.809 "claimed": false, 00:17:13.809 "zoned": false, 00:17:13.809 "supported_io_types": { 00:17:13.809 "read": true, 00:17:13.809 "write": true, 00:17:13.809 "unmap": false, 
00:17:13.809 "flush": false, 00:17:13.809 "reset": true, 00:17:13.809 "nvme_admin": false, 00:17:13.809 "nvme_io": false, 00:17:13.809 "nvme_io_md": false, 00:17:13.809 "write_zeroes": true, 00:17:13.809 "zcopy": false, 00:17:13.809 "get_zone_info": false, 00:17:13.809 "zone_management": false, 00:17:13.809 "zone_append": false, 00:17:13.809 "compare": false, 00:17:13.809 "compare_and_write": false, 00:17:13.809 "abort": false, 00:17:13.809 "seek_hole": false, 00:17:13.809 "seek_data": false, 00:17:13.809 "copy": false, 00:17:13.809 "nvme_iov_md": false 00:17:13.809 }, 00:17:13.809 "memory_domains": [ 00:17:13.809 { 00:17:13.809 "dma_device_id": "system", 00:17:13.809 "dma_device_type": 1 00:17:13.809 }, 00:17:13.809 { 00:17:13.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.809 "dma_device_type": 2 00:17:13.809 }, 00:17:13.809 { 00:17:13.809 "dma_device_id": "system", 00:17:13.809 "dma_device_type": 1 00:17:13.809 }, 00:17:13.809 { 00:17:13.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.809 "dma_device_type": 2 00:17:13.809 } 00:17:13.809 ], 00:17:13.809 "driver_specific": { 00:17:13.809 "raid": { 00:17:13.809 "uuid": "5d90acd5-2e08-453c-abad-a7162eb235dc", 00:17:13.809 "strip_size_kb": 0, 00:17:13.809 "state": "online", 00:17:13.809 "raid_level": "raid1", 00:17:13.809 "superblock": true, 00:17:13.809 "num_base_bdevs": 2, 00:17:13.809 "num_base_bdevs_discovered": 2, 00:17:13.809 "num_base_bdevs_operational": 2, 00:17:13.809 "base_bdevs_list": [ 00:17:13.809 { 00:17:13.809 "name": "pt1", 00:17:13.809 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:13.809 "is_configured": true, 00:17:13.809 "data_offset": 256, 00:17:13.809 "data_size": 7936 00:17:13.809 }, 00:17:13.809 { 00:17:13.809 "name": "pt2", 00:17:13.809 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:13.809 "is_configured": true, 00:17:13.809 "data_offset": 256, 00:17:13.809 "data_size": 7936 00:17:13.809 } 00:17:13.809 ] 00:17:13.809 } 00:17:13.809 } 00:17:13.809 }' 00:17:13.809 
15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:14.068 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:14.068 pt2' 00:17:14.068 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:14.068 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:14.068 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:14.068 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.069 [2024-11-20 15:25:00.435093] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 5d90acd5-2e08-453c-abad-a7162eb235dc '!=' 5d90acd5-2e08-453c-abad-a7162eb235dc ']' 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.069 [2024-11-20 15:25:00.474907] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.069 "name": "raid_bdev1", 00:17:14.069 "uuid": 
"5d90acd5-2e08-453c-abad-a7162eb235dc", 00:17:14.069 "strip_size_kb": 0, 00:17:14.069 "state": "online", 00:17:14.069 "raid_level": "raid1", 00:17:14.069 "superblock": true, 00:17:14.069 "num_base_bdevs": 2, 00:17:14.069 "num_base_bdevs_discovered": 1, 00:17:14.069 "num_base_bdevs_operational": 1, 00:17:14.069 "base_bdevs_list": [ 00:17:14.069 { 00:17:14.069 "name": null, 00:17:14.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.069 "is_configured": false, 00:17:14.069 "data_offset": 0, 00:17:14.069 "data_size": 7936 00:17:14.069 }, 00:17:14.069 { 00:17:14.069 "name": "pt2", 00:17:14.069 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:14.069 "is_configured": true, 00:17:14.069 "data_offset": 256, 00:17:14.069 "data_size": 7936 00:17:14.069 } 00:17:14.069 ] 00:17:14.069 }' 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.069 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.639 [2024-11-20 15:25:00.910871] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:14.639 [2024-11-20 15:25:00.910913] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:14.639 [2024-11-20 15:25:00.911000] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:14.639 [2024-11-20 15:25:00.911049] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:14.639 [2024-11-20 15:25:00.911064] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.639 [2024-11-20 15:25:00.978869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:14.639 [2024-11-20 15:25:00.978943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.639 [2024-11-20 15:25:00.978965] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:14.639 [2024-11-20 15:25:00.978980] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.639 [2024-11-20 15:25:00.981584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.639 [2024-11-20 15:25:00.981632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:14.639 [2024-11-20 15:25:00.981743] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:14.639 [2024-11-20 15:25:00.981798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:14.639 [2024-11-20 15:25:00.981925] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:14.639 [2024-11-20 15:25:00.981941] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:14.639 [2024-11-20 15:25:00.982212] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:14.639 [2024-11-20 15:25:00.982357] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:14.639 [2024-11-20 15:25:00.982367] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:17:14.639 [2024-11-20 15:25:00.982527] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.639 pt2 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.639 15:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.639 15:25:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.639 15:25:01 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.639 "name": "raid_bdev1", 00:17:14.639 "uuid": "5d90acd5-2e08-453c-abad-a7162eb235dc", 00:17:14.639 "strip_size_kb": 0, 00:17:14.639 "state": "online", 00:17:14.639 "raid_level": "raid1", 00:17:14.639 "superblock": true, 00:17:14.639 "num_base_bdevs": 2, 00:17:14.639 "num_base_bdevs_discovered": 1, 00:17:14.639 "num_base_bdevs_operational": 1, 00:17:14.639 "base_bdevs_list": [ 00:17:14.639 { 00:17:14.639 "name": null, 00:17:14.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.639 "is_configured": false, 00:17:14.639 "data_offset": 256, 00:17:14.639 "data_size": 7936 00:17:14.639 }, 00:17:14.639 { 00:17:14.639 "name": "pt2", 00:17:14.639 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:14.639 "is_configured": true, 00:17:14.639 "data_offset": 256, 00:17:14.639 "data_size": 7936 00:17:14.639 } 00:17:14.639 ] 00:17:14.639 }' 00:17:14.639 15:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.639 15:25:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.208 15:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:15.208 15:25:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.208 15:25:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.209 [2024-11-20 15:25:01.390827] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:15.209 [2024-11-20 15:25:01.390868] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:15.209 [2024-11-20 15:25:01.390965] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:15.209 [2024-11-20 15:25:01.391021] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
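The repeated `verify_raid_bdev_state raid_bdev1 online raid1 0 1` calls in this log select the raid bdev's JSON with `jq -r '.[] | select(.name == "raid_bdev1")'` and compare its fields against the expected state, level, strip size, and operational count. A Python sketch of the same field checks, using the `raid_bdev_info` object dumped just above (after `pt1` was deleted, one base bdev remains discovered); the real helper does these comparisons in bash, so this is purely illustrative:

```python
import json

# raid_bdev_info exactly as printed by `rpc_cmd bdev_raid_get_bdevs all`
# in the log above (degraded raid1: pt1 removed, pt2 still configured).
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "uuid": "5d90acd5-2e08-453c-abad-a7162eb235dc",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1,
  "base_bdevs_list": [
    {"name": null, "uuid": "00000000-0000-0000-0000-000000000000",
     "is_configured": false, "data_offset": 0, "data_size": 7936},
    {"name": "pt2", "uuid": "00000000-0000-0000-0000-000000000002",
     "is_configured": true, "data_offset": 256, "data_size": 7936}
  ]
}
""")

def verify_state(info, expected_state, raid_level, strip_size, operational):
    """Sketch of verify_raid_bdev_state: compare the selected raid bdev's
    fields against the caller's expected values."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    # discovered count must agree with the configured entries in the list
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert info["num_base_bdevs_discovered"] == discovered

# Matches `verify_raid_bdev_state raid_bdev1 online raid1 0 1` from the log.
verify_state(raid_bdev_info, "online", "raid1", 0, 1)
```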
00:17:15.209 [2024-11-20 15:25:01.391034] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:15.209 15:25:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.209 15:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:15.209 15:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.209 15:25:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.209 15:25:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.209 15:25:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.209 15:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:15.209 15:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:15.209 15:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:15.209 15:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:15.209 15:25:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.209 15:25:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.209 [2024-11-20 15:25:01.434862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:15.209 [2024-11-20 15:25:01.434933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.209 [2024-11-20 15:25:01.434957] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:15.209 [2024-11-20 15:25:01.434970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.209 [2024-11-20 15:25:01.437453] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.209 [2024-11-20 15:25:01.437494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:15.209 [2024-11-20 15:25:01.437588] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:15.209 [2024-11-20 15:25:01.437632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:15.209 [2024-11-20 15:25:01.437785] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:15.209 [2024-11-20 15:25:01.437799] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:15.209 [2024-11-20 15:25:01.437818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:15.209 [2024-11-20 15:25:01.437888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:15.209 [2024-11-20 15:25:01.437959] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:15.209 [2024-11-20 15:25:01.437969] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:15.209 [2024-11-20 15:25:01.438236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:15.209 [2024-11-20 15:25:01.438371] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:15.209 [2024-11-20 15:25:01.438385] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:15.209 [2024-11-20 15:25:01.438525] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.209 pt1 00:17:15.209 15:25:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.209 15:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:17:15.209 15:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:15.209 15:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.209 15:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.209 15:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.209 15:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.209 15:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:15.209 15:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.209 15:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.209 15:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.209 15:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.209 15:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.209 15:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.209 15:25:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.209 15:25:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.209 15:25:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.209 15:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.209 "name": "raid_bdev1", 00:17:15.209 "uuid": "5d90acd5-2e08-453c-abad-a7162eb235dc", 00:17:15.209 "strip_size_kb": 0, 00:17:15.209 "state": "online", 00:17:15.209 
"raid_level": "raid1", 00:17:15.209 "superblock": true, 00:17:15.209 "num_base_bdevs": 2, 00:17:15.209 "num_base_bdevs_discovered": 1, 00:17:15.209 "num_base_bdevs_operational": 1, 00:17:15.209 "base_bdevs_list": [ 00:17:15.209 { 00:17:15.209 "name": null, 00:17:15.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.209 "is_configured": false, 00:17:15.209 "data_offset": 256, 00:17:15.209 "data_size": 7936 00:17:15.209 }, 00:17:15.209 { 00:17:15.209 "name": "pt2", 00:17:15.209 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:15.209 "is_configured": true, 00:17:15.209 "data_offset": 256, 00:17:15.209 "data_size": 7936 00:17:15.209 } 00:17:15.209 ] 00:17:15.209 }' 00:17:15.209 15:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.209 15:25:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.469 15:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:15.469 15:25:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.469 15:25:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.469 15:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:15.469 15:25:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.469 15:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:15.469 15:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:15.469 15:25:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.469 15:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:15.469 15:25:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # 
set +x 00:17:15.469 [2024-11-20 15:25:01.907080] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:15.469 15:25:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.469 15:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 5d90acd5-2e08-453c-abad-a7162eb235dc '!=' 5d90acd5-2e08-453c-abad-a7162eb235dc ']' 00:17:15.469 15:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 85974 00:17:15.469 15:25:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 85974 ']' 00:17:15.469 15:25:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 85974 00:17:15.728 15:25:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:17:15.728 15:25:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:15.728 15:25:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85974 00:17:15.728 15:25:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:15.728 killing process with pid 85974 00:17:15.728 15:25:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:15.728 15:25:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85974' 00:17:15.728 15:25:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 85974 00:17:15.728 [2024-11-20 15:25:01.991828] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:15.728 [2024-11-20 15:25:01.991931] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:15.728 15:25:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 85974 00:17:15.728 [2024-11-20 15:25:01.991980] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:15.728 [2024-11-20 15:25:01.992001] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:15.728 [2024-11-20 15:25:02.201794] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:17.107 15:25:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:17:17.107 00:17:17.107 real 0m5.944s 00:17:17.107 user 0m8.888s 00:17:17.107 sys 0m1.236s 00:17:17.107 15:25:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:17.107 ************************************ 00:17:17.107 END TEST raid_superblock_test_4k 00:17:17.107 ************************************ 00:17:17.107 15:25:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.107 15:25:03 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:17:17.107 15:25:03 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:17:17.107 15:25:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:17.107 15:25:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:17.107 15:25:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:17.107 ************************************ 00:17:17.107 START TEST raid_rebuild_test_sb_4k 00:17:17.107 ************************************ 00:17:17.107 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:17.107 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:17.107 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:17.107 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:17.107 
15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:17.107 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:17.107 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:17.107 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:17.107 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:17.107 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:17.107 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:17.107 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:17.107 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:17.107 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:17.107 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:17.107 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:17.107 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:17.107 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:17.107 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:17.108 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:17.108 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:17.108 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:17.108 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # 
strip_size=0 00:17:17.108 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:17.108 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:17.108 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86302 00:17:17.108 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:17.108 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86302 00:17:17.108 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86302 ']' 00:17:17.108 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.108 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:17.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.108 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.108 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:17.108 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.108 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:17.108 Zero copy mechanism will not be used. 00:17:17.108 [2024-11-20 15:25:03.532467] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:17:17.108 [2024-11-20 15:25:03.532604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86302 ] 00:17:17.368 [2024-11-20 15:25:03.715830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.368 [2024-11-20 15:25:03.833900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.628 [2024-11-20 15:25:04.041158] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:17.628 [2024-11-20 15:25:04.041216] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.198 BaseBdev1_malloc 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.198 [2024-11-20 15:25:04.434640] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:18.198 [2024-11-20 15:25:04.434735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.198 [2024-11-20 15:25:04.434760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:18.198 [2024-11-20 15:25:04.434776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.198 [2024-11-20 15:25:04.437272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.198 [2024-11-20 15:25:04.437316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:18.198 BaseBdev1 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.198 BaseBdev2_malloc 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.198 [2024-11-20 15:25:04.493934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:18.198 [2024-11-20 15:25:04.494011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:17:18.198 [2024-11-20 15:25:04.494042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:18.198 [2024-11-20 15:25:04.494057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.198 [2024-11-20 15:25:04.496484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.198 [2024-11-20 15:25:04.496528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:18.198 BaseBdev2 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.198 spare_malloc 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.198 spare_delay 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.198 
[2024-11-20 15:25:04.578293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:18.198 [2024-11-20 15:25:04.578373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.198 [2024-11-20 15:25:04.578398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:18.198 [2024-11-20 15:25:04.578412] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.198 [2024-11-20 15:25:04.580878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.198 [2024-11-20 15:25:04.580923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:18.198 spare 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.198 [2024-11-20 15:25:04.590346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:18.198 [2024-11-20 15:25:04.592470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:18.198 [2024-11-20 15:25:04.592686] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:18.198 [2024-11-20 15:25:04.592704] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:18.198 [2024-11-20 15:25:04.592997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:18.198 [2024-11-20 15:25:04.593173] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:18.198 [2024-11-20 
15:25:04.593193] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:18.198 [2024-11-20 15:25:04.593375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.198 "name": "raid_bdev1", 00:17:18.198 "uuid": "25132a14-b152-4444-af1a-cfd434c99aa4", 00:17:18.198 "strip_size_kb": 0, 00:17:18.198 "state": "online", 00:17:18.198 "raid_level": "raid1", 00:17:18.198 "superblock": true, 00:17:18.198 "num_base_bdevs": 2, 00:17:18.198 "num_base_bdevs_discovered": 2, 00:17:18.198 "num_base_bdevs_operational": 2, 00:17:18.198 "base_bdevs_list": [ 00:17:18.198 { 00:17:18.198 "name": "BaseBdev1", 00:17:18.198 "uuid": "56112372-30a8-5738-ba61-0225e0569cbe", 00:17:18.198 "is_configured": true, 00:17:18.198 "data_offset": 256, 00:17:18.198 "data_size": 7936 00:17:18.198 }, 00:17:18.198 { 00:17:18.198 "name": "BaseBdev2", 00:17:18.198 "uuid": "254f3114-bf04-5616-873f-cf2dc6728c9b", 00:17:18.198 "is_configured": true, 00:17:18.198 "data_offset": 256, 00:17:18.198 "data_size": 7936 00:17:18.198 } 00:17:18.198 ] 00:17:18.198 }' 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.198 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.784 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:18.784 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:18.784 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.784 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.784 [2024-11-20 15:25:04.970110] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:18.784 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.784 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:17:18.784 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.784 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.784 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.784 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:18.784 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.784 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:18.784 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:18.784 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:18.784 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:18.784 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:18.784 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:18.784 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:18.784 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:18.784 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:18.784 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:18.784 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:18.784 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:18.784 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:18.784 
15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:18.784 [2024-11-20 15:25:05.233572] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:18.784 /dev/nbd0 00:17:19.044 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:19.044 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:19.044 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:19.044 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:19.045 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:19.045 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:19.045 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:19.045 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:19.045 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:19.045 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:19.045 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:19.045 1+0 records in 00:17:19.045 1+0 records out 00:17:19.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423421 s, 9.7 MB/s 00:17:19.045 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.045 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:19.045 15:25:05 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.045 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:19.045 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:19.045 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:19.045 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:19.045 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:19.045 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:19.045 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:19.613 7936+0 records in 00:17:19.613 7936+0 records out 00:17:19.613 32505856 bytes (33 MB, 31 MiB) copied, 0.71529 s, 45.4 MB/s 00:17:19.613 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:19.613 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:19.613 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:19.613 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:19.613 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:19.613 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:19.613 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:19.872 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:19.872 
[2024-11-20 15:25:06.241150] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.872 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:19.872 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:19.872 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:19.872 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:19.872 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:19.872 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:19.872 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:19.872 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:19.872 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.872 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.872 [2024-11-20 15:25:06.262507] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:19.872 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.872 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:19.872 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.872 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.872 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.872 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.872 15:25:06 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:19.872 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.872 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.872 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.872 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.872 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.872 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.873 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.873 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.873 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.873 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.873 "name": "raid_bdev1", 00:17:19.873 "uuid": "25132a14-b152-4444-af1a-cfd434c99aa4", 00:17:19.873 "strip_size_kb": 0, 00:17:19.873 "state": "online", 00:17:19.873 "raid_level": "raid1", 00:17:19.873 "superblock": true, 00:17:19.873 "num_base_bdevs": 2, 00:17:19.873 "num_base_bdevs_discovered": 1, 00:17:19.873 "num_base_bdevs_operational": 1, 00:17:19.873 "base_bdevs_list": [ 00:17:19.873 { 00:17:19.873 "name": null, 00:17:19.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.873 "is_configured": false, 00:17:19.873 "data_offset": 0, 00:17:19.873 "data_size": 7936 00:17:19.873 }, 00:17:19.873 { 00:17:19.873 "name": "BaseBdev2", 00:17:19.873 "uuid": "254f3114-bf04-5616-873f-cf2dc6728c9b", 00:17:19.873 "is_configured": true, 00:17:19.873 "data_offset": 256, 00:17:19.873 
"data_size": 7936 00:17:19.873 } 00:17:19.873 ] 00:17:19.873 }' 00:17:19.873 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.873 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.441 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:20.441 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.441 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.441 [2024-11-20 15:25:06.665945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:20.441 [2024-11-20 15:25:06.684701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:20.441 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.441 15:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:20.441 [2024-11-20 15:25:06.686864] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:21.378 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:21.378 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.378 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:21.378 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:21.378 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.378 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.378 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:17:21.378 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.378 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.378 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.378 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.378 "name": "raid_bdev1", 00:17:21.378 "uuid": "25132a14-b152-4444-af1a-cfd434c99aa4", 00:17:21.378 "strip_size_kb": 0, 00:17:21.378 "state": "online", 00:17:21.378 "raid_level": "raid1", 00:17:21.378 "superblock": true, 00:17:21.378 "num_base_bdevs": 2, 00:17:21.378 "num_base_bdevs_discovered": 2, 00:17:21.378 "num_base_bdevs_operational": 2, 00:17:21.378 "process": { 00:17:21.378 "type": "rebuild", 00:17:21.378 "target": "spare", 00:17:21.378 "progress": { 00:17:21.378 "blocks": 2560, 00:17:21.378 "percent": 32 00:17:21.378 } 00:17:21.378 }, 00:17:21.378 "base_bdevs_list": [ 00:17:21.378 { 00:17:21.378 "name": "spare", 00:17:21.378 "uuid": "7d58be47-53c2-5afa-8ef5-9caa5c3dc795", 00:17:21.378 "is_configured": true, 00:17:21.379 "data_offset": 256, 00:17:21.379 "data_size": 7936 00:17:21.379 }, 00:17:21.379 { 00:17:21.379 "name": "BaseBdev2", 00:17:21.379 "uuid": "254f3114-bf04-5616-873f-cf2dc6728c9b", 00:17:21.379 "is_configured": true, 00:17:21.379 "data_offset": 256, 00:17:21.379 "data_size": 7936 00:17:21.379 } 00:17:21.379 ] 00:17:21.379 }' 00:17:21.379 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.379 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:21.379 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.379 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:17:21.379 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:21.379 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.379 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.379 [2024-11-20 15:25:07.822892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:21.637 [2024-11-20 15:25:07.892499] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:21.637 [2024-11-20 15:25:07.892579] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.637 [2024-11-20 15:25:07.892597] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:21.637 [2024-11-20 15:25:07.892609] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:21.637 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.637 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:21.637 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.637 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.637 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:21.637 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:21.637 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:21.637 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.637 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:21.637 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.637 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.637 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.637 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.637 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.637 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.637 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.637 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.637 "name": "raid_bdev1", 00:17:21.637 "uuid": "25132a14-b152-4444-af1a-cfd434c99aa4", 00:17:21.637 "strip_size_kb": 0, 00:17:21.637 "state": "online", 00:17:21.637 "raid_level": "raid1", 00:17:21.637 "superblock": true, 00:17:21.637 "num_base_bdevs": 2, 00:17:21.637 "num_base_bdevs_discovered": 1, 00:17:21.637 "num_base_bdevs_operational": 1, 00:17:21.637 "base_bdevs_list": [ 00:17:21.637 { 00:17:21.637 "name": null, 00:17:21.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.637 "is_configured": false, 00:17:21.637 "data_offset": 0, 00:17:21.637 "data_size": 7936 00:17:21.637 }, 00:17:21.637 { 00:17:21.637 "name": "BaseBdev2", 00:17:21.637 "uuid": "254f3114-bf04-5616-873f-cf2dc6728c9b", 00:17:21.637 "is_configured": true, 00:17:21.637 "data_offset": 256, 00:17:21.637 "data_size": 7936 00:17:21.637 } 00:17:21.637 ] 00:17:21.637 }' 00:17:21.637 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.637 15:25:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.896 15:25:08 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:21.896 15:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.896 15:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:21.896 15:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:21.896 15:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.896 15:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.896 15:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.896 15:25:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.896 15:25:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.896 15:25:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.155 15:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.155 "name": "raid_bdev1", 00:17:22.155 "uuid": "25132a14-b152-4444-af1a-cfd434c99aa4", 00:17:22.155 "strip_size_kb": 0, 00:17:22.155 "state": "online", 00:17:22.155 "raid_level": "raid1", 00:17:22.155 "superblock": true, 00:17:22.155 "num_base_bdevs": 2, 00:17:22.155 "num_base_bdevs_discovered": 1, 00:17:22.155 "num_base_bdevs_operational": 1, 00:17:22.155 "base_bdevs_list": [ 00:17:22.155 { 00:17:22.155 "name": null, 00:17:22.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.155 "is_configured": false, 00:17:22.155 "data_offset": 0, 00:17:22.155 "data_size": 7936 00:17:22.155 }, 00:17:22.155 { 00:17:22.155 "name": "BaseBdev2", 00:17:22.155 "uuid": "254f3114-bf04-5616-873f-cf2dc6728c9b", 00:17:22.155 "is_configured": true, 00:17:22.155 "data_offset": 
256, 00:17:22.155 "data_size": 7936 00:17:22.155 } 00:17:22.155 ] 00:17:22.155 }' 00:17:22.155 15:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.155 15:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:22.155 15:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.155 15:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:22.155 15:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:22.155 15:25:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.155 15:25:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.155 [2024-11-20 15:25:08.470872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:22.155 [2024-11-20 15:25:08.487886] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:22.155 15:25:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.155 15:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:22.155 [2024-11-20 15:25:08.490082] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:23.090 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:23.090 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:23.090 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:23.090 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:23.090 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.090 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.090 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.090 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.090 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.090 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.090 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:23.090 "name": "raid_bdev1", 00:17:23.090 "uuid": "25132a14-b152-4444-af1a-cfd434c99aa4", 00:17:23.090 "strip_size_kb": 0, 00:17:23.090 "state": "online", 00:17:23.090 "raid_level": "raid1", 00:17:23.090 "superblock": true, 00:17:23.090 "num_base_bdevs": 2, 00:17:23.090 "num_base_bdevs_discovered": 2, 00:17:23.090 "num_base_bdevs_operational": 2, 00:17:23.090 "process": { 00:17:23.090 "type": "rebuild", 00:17:23.090 "target": "spare", 00:17:23.090 "progress": { 00:17:23.090 "blocks": 2560, 00:17:23.090 "percent": 32 00:17:23.090 } 00:17:23.090 }, 00:17:23.090 "base_bdevs_list": [ 00:17:23.090 { 00:17:23.090 "name": "spare", 00:17:23.090 "uuid": "7d58be47-53c2-5afa-8ef5-9caa5c3dc795", 00:17:23.090 "is_configured": true, 00:17:23.090 "data_offset": 256, 00:17:23.090 "data_size": 7936 00:17:23.090 }, 00:17:23.090 { 00:17:23.090 "name": "BaseBdev2", 00:17:23.090 "uuid": "254f3114-bf04-5616-873f-cf2dc6728c9b", 00:17:23.090 "is_configured": true, 00:17:23.090 "data_offset": 256, 00:17:23.090 "data_size": 7936 00:17:23.090 } 00:17:23.090 ] 00:17:23.090 }' 00:17:23.090 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:23.349 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:17:23.349 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.349 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:23.349 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:23.349 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:23.349 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:23.349 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:23.349 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:23.349 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:23.349 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=673 00:17:23.349 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:23.349 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:23.349 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:23.349 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:23.349 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:23.349 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.349 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.349 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.349 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.349 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.349 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.349 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:23.349 "name": "raid_bdev1", 00:17:23.349 "uuid": "25132a14-b152-4444-af1a-cfd434c99aa4", 00:17:23.349 "strip_size_kb": 0, 00:17:23.349 "state": "online", 00:17:23.349 "raid_level": "raid1", 00:17:23.349 "superblock": true, 00:17:23.349 "num_base_bdevs": 2, 00:17:23.349 "num_base_bdevs_discovered": 2, 00:17:23.349 "num_base_bdevs_operational": 2, 00:17:23.349 "process": { 00:17:23.349 "type": "rebuild", 00:17:23.349 "target": "spare", 00:17:23.349 "progress": { 00:17:23.349 "blocks": 2816, 00:17:23.349 "percent": 35 00:17:23.349 } 00:17:23.349 }, 00:17:23.349 "base_bdevs_list": [ 00:17:23.349 { 00:17:23.349 "name": "spare", 00:17:23.349 "uuid": "7d58be47-53c2-5afa-8ef5-9caa5c3dc795", 00:17:23.349 "is_configured": true, 00:17:23.349 "data_offset": 256, 00:17:23.349 "data_size": 7936 00:17:23.349 }, 00:17:23.349 { 00:17:23.349 "name": "BaseBdev2", 00:17:23.349 "uuid": "254f3114-bf04-5616-873f-cf2dc6728c9b", 00:17:23.349 "is_configured": true, 00:17:23.349 "data_offset": 256, 00:17:23.349 "data_size": 7936 00:17:23.349 } 00:17:23.349 ] 00:17:23.349 }' 00:17:23.349 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:23.349 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:23.349 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.349 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:23.349 15:25:09 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:17:24.287 15:25:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:24.287 15:25:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:24.287 15:25:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.287 15:25:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:24.287 15:25:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:24.287 15:25:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.287 15:25:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.287 15:25:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.287 15:25:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.287 15:25:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.546 15:25:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.546 15:25:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.546 "name": "raid_bdev1", 00:17:24.546 "uuid": "25132a14-b152-4444-af1a-cfd434c99aa4", 00:17:24.546 "strip_size_kb": 0, 00:17:24.546 "state": "online", 00:17:24.546 "raid_level": "raid1", 00:17:24.546 "superblock": true, 00:17:24.546 "num_base_bdevs": 2, 00:17:24.546 "num_base_bdevs_discovered": 2, 00:17:24.546 "num_base_bdevs_operational": 2, 00:17:24.546 "process": { 00:17:24.546 "type": "rebuild", 00:17:24.546 "target": "spare", 00:17:24.546 "progress": { 00:17:24.546 "blocks": 5632, 00:17:24.546 "percent": 70 00:17:24.546 } 00:17:24.546 }, 00:17:24.546 "base_bdevs_list": [ 00:17:24.546 { 
00:17:24.546 "name": "spare", 00:17:24.546 "uuid": "7d58be47-53c2-5afa-8ef5-9caa5c3dc795", 00:17:24.546 "is_configured": true, 00:17:24.546 "data_offset": 256, 00:17:24.546 "data_size": 7936 00:17:24.546 }, 00:17:24.546 { 00:17:24.546 "name": "BaseBdev2", 00:17:24.546 "uuid": "254f3114-bf04-5616-873f-cf2dc6728c9b", 00:17:24.546 "is_configured": true, 00:17:24.546 "data_offset": 256, 00:17:24.546 "data_size": 7936 00:17:24.546 } 00:17:24.546 ] 00:17:24.546 }' 00:17:24.546 15:25:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.546 15:25:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:24.546 15:25:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.546 15:25:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:24.546 15:25:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:25.484 [2024-11-20 15:25:11.604347] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:25.484 [2024-11-20 15:25:11.604443] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:25.484 [2024-11-20 15:25:11.604561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.484 15:25:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:25.484 15:25:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:25.484 15:25:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.484 15:25:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:25.484 15:25:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:17:25.484 15:25:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.484 15:25:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.484 15:25:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.484 15:25:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.484 15:25:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.484 15:25:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.484 15:25:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.484 "name": "raid_bdev1", 00:17:25.484 "uuid": "25132a14-b152-4444-af1a-cfd434c99aa4", 00:17:25.484 "strip_size_kb": 0, 00:17:25.484 "state": "online", 00:17:25.484 "raid_level": "raid1", 00:17:25.484 "superblock": true, 00:17:25.484 "num_base_bdevs": 2, 00:17:25.484 "num_base_bdevs_discovered": 2, 00:17:25.484 "num_base_bdevs_operational": 2, 00:17:25.484 "base_bdevs_list": [ 00:17:25.484 { 00:17:25.484 "name": "spare", 00:17:25.484 "uuid": "7d58be47-53c2-5afa-8ef5-9caa5c3dc795", 00:17:25.484 "is_configured": true, 00:17:25.484 "data_offset": 256, 00:17:25.484 "data_size": 7936 00:17:25.484 }, 00:17:25.484 { 00:17:25.484 "name": "BaseBdev2", 00:17:25.484 "uuid": "254f3114-bf04-5616-873f-cf2dc6728c9b", 00:17:25.484 "is_configured": true, 00:17:25.484 "data_offset": 256, 00:17:25.484 "data_size": 7936 00:17:25.484 } 00:17:25.484 ] 00:17:25.484 }' 00:17:25.484 15:25:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.743 15:25:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:25.743 15:25:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:17:25.743 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:25.743 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:17:25.743 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:25.743 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.743 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:25.743 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:25.743 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.743 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.743 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.743 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.743 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.743 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.743 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.743 "name": "raid_bdev1", 00:17:25.743 "uuid": "25132a14-b152-4444-af1a-cfd434c99aa4", 00:17:25.743 "strip_size_kb": 0, 00:17:25.743 "state": "online", 00:17:25.743 "raid_level": "raid1", 00:17:25.743 "superblock": true, 00:17:25.743 "num_base_bdevs": 2, 00:17:25.743 "num_base_bdevs_discovered": 2, 00:17:25.743 "num_base_bdevs_operational": 2, 00:17:25.743 "base_bdevs_list": [ 00:17:25.743 { 00:17:25.743 "name": "spare", 00:17:25.743 "uuid": "7d58be47-53c2-5afa-8ef5-9caa5c3dc795", 00:17:25.743 "is_configured": true, 00:17:25.743 
"data_offset": 256, 00:17:25.743 "data_size": 7936 00:17:25.743 }, 00:17:25.743 { 00:17:25.743 "name": "BaseBdev2", 00:17:25.743 "uuid": "254f3114-bf04-5616-873f-cf2dc6728c9b", 00:17:25.743 "is_configured": true, 00:17:25.743 "data_offset": 256, 00:17:25.743 "data_size": 7936 00:17:25.743 } 00:17:25.743 ] 00:17:25.743 }' 00:17:25.743 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.743 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:25.743 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.743 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:25.743 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:25.743 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.743 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.743 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.743 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.743 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:25.743 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.743 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.743 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.743 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.743 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:17:25.743 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.743 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.744 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.744 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.744 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.744 "name": "raid_bdev1", 00:17:25.744 "uuid": "25132a14-b152-4444-af1a-cfd434c99aa4", 00:17:25.744 "strip_size_kb": 0, 00:17:25.744 "state": "online", 00:17:25.744 "raid_level": "raid1", 00:17:25.744 "superblock": true, 00:17:25.744 "num_base_bdevs": 2, 00:17:25.744 "num_base_bdevs_discovered": 2, 00:17:25.744 "num_base_bdevs_operational": 2, 00:17:25.744 "base_bdevs_list": [ 00:17:25.744 { 00:17:25.744 "name": "spare", 00:17:25.744 "uuid": "7d58be47-53c2-5afa-8ef5-9caa5c3dc795", 00:17:25.744 "is_configured": true, 00:17:25.744 "data_offset": 256, 00:17:25.744 "data_size": 7936 00:17:25.744 }, 00:17:25.744 { 00:17:25.744 "name": "BaseBdev2", 00:17:25.744 "uuid": "254f3114-bf04-5616-873f-cf2dc6728c9b", 00:17:25.744 "is_configured": true, 00:17:25.744 "data_offset": 256, 00:17:25.744 "data_size": 7936 00:17:25.744 } 00:17:25.744 ] 00:17:25.744 }' 00:17:25.744 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.744 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.312 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:26.312 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.312 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.312 
[2024-11-20 15:25:12.588296] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:26.312 [2024-11-20 15:25:12.588341] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:26.312 [2024-11-20 15:25:12.588431] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:26.312 [2024-11-20 15:25:12.588502] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:26.312 [2024-11-20 15:25:12.588517] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:26.312 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.312 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.312 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.312 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:17:26.312 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.312 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.312 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:26.312 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:26.312 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:26.312 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:26.312 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:26.312 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:17:26.312 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:26.313 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:26.313 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:26.313 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:26.313 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:26.313 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:26.313 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:26.572 /dev/nbd0 00:17:26.572 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:26.572 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:26.572 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:26.572 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:26.572 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:26.572 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:26.572 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:26.572 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:26.572 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:26.572 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:26.572 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:26.572 1+0 records in 00:17:26.572 1+0 records out 00:17:26.572 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383883 s, 10.7 MB/s 00:17:26.572 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:26.572 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:26.572 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:26.572 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:26.572 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:26.572 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:26.572 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:26.572 15:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:26.832 /dev/nbd1 00:17:26.832 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:26.832 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:26.832 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:26.832 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:26.832 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:26.832 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:26.832 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:26.832 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:26.833 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:26.833 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:26.833 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:26.833 1+0 records in 00:17:26.833 1+0 records out 00:17:26.833 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431379 s, 9.5 MB/s 00:17:26.833 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:26.833 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:26.833 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:26.833 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:26.833 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:26.833 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:26.833 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:26.833 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:27.091 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:27.091 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:27.091 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:27.091 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:27.091 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:27.091 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:27.091 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:27.349 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:27.349 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:27.349 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:27.349 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:27.349 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:27.349 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:27.349 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:27.349 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:27.349 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:27.349 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:27.608 15:25:13 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.608 [2024-11-20 15:25:13.872314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:27.608 [2024-11-20 15:25:13.872387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.608 [2024-11-20 15:25:13.872416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:27.608 [2024-11-20 15:25:13.872428] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.608 [2024-11-20 15:25:13.874974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.608 
[2024-11-20 15:25:13.875014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:27.608 [2024-11-20 15:25:13.875115] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:27.608 [2024-11-20 15:25:13.875165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:27.608 [2024-11-20 15:25:13.875327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:27.608 spare 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.608 [2024-11-20 15:25:13.975261] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:27.608 [2024-11-20 15:25:13.975333] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:27.608 [2024-11-20 15:25:13.975694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:27.608 [2024-11-20 15:25:13.975910] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:27.608 [2024-11-20 15:25:13.975930] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:27.608 [2024-11-20 15:25:13.976163] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:27.608 15:25:13 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.608 15:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.608 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.608 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.608 "name": "raid_bdev1", 00:17:27.608 "uuid": "25132a14-b152-4444-af1a-cfd434c99aa4", 00:17:27.608 "strip_size_kb": 0, 00:17:27.608 "state": "online", 00:17:27.608 "raid_level": "raid1", 00:17:27.608 "superblock": true, 00:17:27.608 "num_base_bdevs": 2, 00:17:27.608 "num_base_bdevs_discovered": 2, 00:17:27.608 "num_base_bdevs_operational": 2, 
00:17:27.608 "base_bdevs_list": [ 00:17:27.608 { 00:17:27.608 "name": "spare", 00:17:27.608 "uuid": "7d58be47-53c2-5afa-8ef5-9caa5c3dc795", 00:17:27.608 "is_configured": true, 00:17:27.608 "data_offset": 256, 00:17:27.608 "data_size": 7936 00:17:27.608 }, 00:17:27.608 { 00:17:27.608 "name": "BaseBdev2", 00:17:27.608 "uuid": "254f3114-bf04-5616-873f-cf2dc6728c9b", 00:17:27.608 "is_configured": true, 00:17:27.608 "data_offset": 256, 00:17:27.608 "data_size": 7936 00:17:27.608 } 00:17:27.608 ] 00:17:27.608 }' 00:17:27.608 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.608 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.175 "name": "raid_bdev1", 00:17:28.175 
"uuid": "25132a14-b152-4444-af1a-cfd434c99aa4", 00:17:28.175 "strip_size_kb": 0, 00:17:28.175 "state": "online", 00:17:28.175 "raid_level": "raid1", 00:17:28.175 "superblock": true, 00:17:28.175 "num_base_bdevs": 2, 00:17:28.175 "num_base_bdevs_discovered": 2, 00:17:28.175 "num_base_bdevs_operational": 2, 00:17:28.175 "base_bdevs_list": [ 00:17:28.175 { 00:17:28.175 "name": "spare", 00:17:28.175 "uuid": "7d58be47-53c2-5afa-8ef5-9caa5c3dc795", 00:17:28.175 "is_configured": true, 00:17:28.175 "data_offset": 256, 00:17:28.175 "data_size": 7936 00:17:28.175 }, 00:17:28.175 { 00:17:28.175 "name": "BaseBdev2", 00:17:28.175 "uuid": "254f3114-bf04-5616-873f-cf2dc6728c9b", 00:17:28.175 "is_configured": true, 00:17:28.175 "data_offset": 256, 00:17:28.175 "data_size": 7936 00:17:28.175 } 00:17:28.175 ] 00:17:28.175 }' 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.175 [2024-11-20 15:25:14.603784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.175 15:25:14 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.175 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.490 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.490 "name": "raid_bdev1", 00:17:28.490 "uuid": "25132a14-b152-4444-af1a-cfd434c99aa4", 00:17:28.490 "strip_size_kb": 0, 00:17:28.490 "state": "online", 00:17:28.490 "raid_level": "raid1", 00:17:28.490 "superblock": true, 00:17:28.490 "num_base_bdevs": 2, 00:17:28.490 "num_base_bdevs_discovered": 1, 00:17:28.490 "num_base_bdevs_operational": 1, 00:17:28.490 "base_bdevs_list": [ 00:17:28.490 { 00:17:28.490 "name": null, 00:17:28.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.490 "is_configured": false, 00:17:28.490 "data_offset": 0, 00:17:28.490 "data_size": 7936 00:17:28.490 }, 00:17:28.490 { 00:17:28.490 "name": "BaseBdev2", 00:17:28.490 "uuid": "254f3114-bf04-5616-873f-cf2dc6728c9b", 00:17:28.490 "is_configured": true, 00:17:28.490 "data_offset": 256, 00:17:28.490 "data_size": 7936 00:17:28.490 } 00:17:28.490 ] 00:17:28.490 }' 00:17:28.490 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.490 15:25:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.748 15:25:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:28.748 15:25:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.748 15:25:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.748 [2024-11-20 15:25:15.059145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:28.748 [2024-11-20 15:25:15.059363] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than 
existing raid bdev raid_bdev1 (5) 00:17:28.748 [2024-11-20 15:25:15.059383] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:28.748 [2024-11-20 15:25:15.059421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:28.748 [2024-11-20 15:25:15.075845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:28.748 15:25:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.748 15:25:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:28.748 [2024-11-20 15:25:15.078007] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:29.685 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:29.685 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.685 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:29.685 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:29.685 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.685 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.685 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.685 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.685 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.685 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.685 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:29.685 "name": "raid_bdev1", 00:17:29.685 "uuid": "25132a14-b152-4444-af1a-cfd434c99aa4", 00:17:29.685 "strip_size_kb": 0, 00:17:29.685 "state": "online", 00:17:29.685 "raid_level": "raid1", 00:17:29.685 "superblock": true, 00:17:29.685 "num_base_bdevs": 2, 00:17:29.685 "num_base_bdevs_discovered": 2, 00:17:29.685 "num_base_bdevs_operational": 2, 00:17:29.685 "process": { 00:17:29.685 "type": "rebuild", 00:17:29.685 "target": "spare", 00:17:29.685 "progress": { 00:17:29.685 "blocks": 2560, 00:17:29.685 "percent": 32 00:17:29.685 } 00:17:29.685 }, 00:17:29.685 "base_bdevs_list": [ 00:17:29.685 { 00:17:29.685 "name": "spare", 00:17:29.685 "uuid": "7d58be47-53c2-5afa-8ef5-9caa5c3dc795", 00:17:29.685 "is_configured": true, 00:17:29.685 "data_offset": 256, 00:17:29.685 "data_size": 7936 00:17:29.685 }, 00:17:29.685 { 00:17:29.685 "name": "BaseBdev2", 00:17:29.685 "uuid": "254f3114-bf04-5616-873f-cf2dc6728c9b", 00:17:29.685 "is_configured": true, 00:17:29.685 "data_offset": 256, 00:17:29.685 "data_size": 7936 00:17:29.685 } 00:17:29.685 ] 00:17:29.685 }' 00:17:29.685 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.945 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:29.945 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.945 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:29.945 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:29.945 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.945 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.945 [2024-11-20 15:25:16.226019] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:17:29.945 [2024-11-20 15:25:16.283576] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:29.945 [2024-11-20 15:25:16.283684] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:29.945 [2024-11-20 15:25:16.283702] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:29.945 [2024-11-20 15:25:16.283714] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:29.945 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.945 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:29.945 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:29.945 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:29.945 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:29.945 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:29.945 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:29.945 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.945 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.945 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.945 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.945 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.946 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:29.946 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.946 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.946 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.946 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.946 "name": "raid_bdev1", 00:17:29.946 "uuid": "25132a14-b152-4444-af1a-cfd434c99aa4", 00:17:29.946 "strip_size_kb": 0, 00:17:29.946 "state": "online", 00:17:29.946 "raid_level": "raid1", 00:17:29.946 "superblock": true, 00:17:29.946 "num_base_bdevs": 2, 00:17:29.946 "num_base_bdevs_discovered": 1, 00:17:29.946 "num_base_bdevs_operational": 1, 00:17:29.946 "base_bdevs_list": [ 00:17:29.946 { 00:17:29.946 "name": null, 00:17:29.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.946 "is_configured": false, 00:17:29.946 "data_offset": 0, 00:17:29.946 "data_size": 7936 00:17:29.946 }, 00:17:29.946 { 00:17:29.946 "name": "BaseBdev2", 00:17:29.946 "uuid": "254f3114-bf04-5616-873f-cf2dc6728c9b", 00:17:29.946 "is_configured": true, 00:17:29.946 "data_offset": 256, 00:17:29.946 "data_size": 7936 00:17:29.946 } 00:17:29.946 ] 00:17:29.946 }' 00:17:29.946 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.946 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.553 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:30.553 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.553 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.553 [2024-11-20 15:25:16.763845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:30.553 [2024-11-20 
15:25:16.763919] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.553 [2024-11-20 15:25:16.763944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:30.553 [2024-11-20 15:25:16.763958] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.553 [2024-11-20 15:25:16.764422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.553 [2024-11-20 15:25:16.764446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:30.553 [2024-11-20 15:25:16.764540] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:30.553 [2024-11-20 15:25:16.764557] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:30.553 [2024-11-20 15:25:16.764572] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:30.553 [2024-11-20 15:25:16.764595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:30.553 [2024-11-20 15:25:16.780795] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:30.553 spare 00:17:30.553 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.553 15:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:30.553 [2024-11-20 15:25:16.782979] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:31.490 15:25:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:31.490 15:25:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.490 15:25:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:31.490 15:25:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:31.490 15:25:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.490 15:25:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.490 15:25:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.490 15:25:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.490 15:25:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.490 15:25:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.490 15:25:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.490 "name": "raid_bdev1", 00:17:31.490 "uuid": "25132a14-b152-4444-af1a-cfd434c99aa4", 00:17:31.490 "strip_size_kb": 0, 00:17:31.490 
"state": "online", 00:17:31.490 "raid_level": "raid1", 00:17:31.490 "superblock": true, 00:17:31.490 "num_base_bdevs": 2, 00:17:31.490 "num_base_bdevs_discovered": 2, 00:17:31.490 "num_base_bdevs_operational": 2, 00:17:31.490 "process": { 00:17:31.490 "type": "rebuild", 00:17:31.490 "target": "spare", 00:17:31.490 "progress": { 00:17:31.490 "blocks": 2560, 00:17:31.490 "percent": 32 00:17:31.490 } 00:17:31.490 }, 00:17:31.490 "base_bdevs_list": [ 00:17:31.490 { 00:17:31.490 "name": "spare", 00:17:31.490 "uuid": "7d58be47-53c2-5afa-8ef5-9caa5c3dc795", 00:17:31.490 "is_configured": true, 00:17:31.490 "data_offset": 256, 00:17:31.490 "data_size": 7936 00:17:31.490 }, 00:17:31.490 { 00:17:31.490 "name": "BaseBdev2", 00:17:31.490 "uuid": "254f3114-bf04-5616-873f-cf2dc6728c9b", 00:17:31.490 "is_configured": true, 00:17:31.490 "data_offset": 256, 00:17:31.490 "data_size": 7936 00:17:31.490 } 00:17:31.490 ] 00:17:31.490 }' 00:17:31.490 15:25:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.490 15:25:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:31.490 15:25:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.490 15:25:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:31.490 15:25:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:31.490 15:25:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.490 15:25:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.490 [2024-11-20 15:25:17.914996] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:31.749 [2024-11-20 15:25:17.988536] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:17:31.749 [2024-11-20 15:25:17.988624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.749 [2024-11-20 15:25:17.988644] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:31.749 [2024-11-20 15:25:17.988664] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:31.749 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.749 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:31.749 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.749 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.749 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.749 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.749 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:31.749 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.749 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.749 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.749 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.749 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.749 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.749 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.749 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.749 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.749 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.749 "name": "raid_bdev1", 00:17:31.749 "uuid": "25132a14-b152-4444-af1a-cfd434c99aa4", 00:17:31.749 "strip_size_kb": 0, 00:17:31.749 "state": "online", 00:17:31.749 "raid_level": "raid1", 00:17:31.749 "superblock": true, 00:17:31.749 "num_base_bdevs": 2, 00:17:31.749 "num_base_bdevs_discovered": 1, 00:17:31.749 "num_base_bdevs_operational": 1, 00:17:31.749 "base_bdevs_list": [ 00:17:31.749 { 00:17:31.749 "name": null, 00:17:31.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.749 "is_configured": false, 00:17:31.749 "data_offset": 0, 00:17:31.749 "data_size": 7936 00:17:31.749 }, 00:17:31.749 { 00:17:31.749 "name": "BaseBdev2", 00:17:31.749 "uuid": "254f3114-bf04-5616-873f-cf2dc6728c9b", 00:17:31.749 "is_configured": true, 00:17:31.749 "data_offset": 256, 00:17:31.749 "data_size": 7936 00:17:31.749 } 00:17:31.749 ] 00:17:31.749 }' 00:17:31.749 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.749 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.008 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:32.008 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.008 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:32.008 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:32.008 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.008 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.008 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.008 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.008 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.008 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.008 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.008 "name": "raid_bdev1", 00:17:32.008 "uuid": "25132a14-b152-4444-af1a-cfd434c99aa4", 00:17:32.008 "strip_size_kb": 0, 00:17:32.008 "state": "online", 00:17:32.008 "raid_level": "raid1", 00:17:32.008 "superblock": true, 00:17:32.008 "num_base_bdevs": 2, 00:17:32.008 "num_base_bdevs_discovered": 1, 00:17:32.008 "num_base_bdevs_operational": 1, 00:17:32.008 "base_bdevs_list": [ 00:17:32.008 { 00:17:32.008 "name": null, 00:17:32.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.008 "is_configured": false, 00:17:32.008 "data_offset": 0, 00:17:32.008 "data_size": 7936 00:17:32.008 }, 00:17:32.008 { 00:17:32.008 "name": "BaseBdev2", 00:17:32.008 "uuid": "254f3114-bf04-5616-873f-cf2dc6728c9b", 00:17:32.008 "is_configured": true, 00:17:32.008 "data_offset": 256, 00:17:32.008 "data_size": 7936 00:17:32.008 } 00:17:32.008 ] 00:17:32.008 }' 00:17:32.008 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.268 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:32.268 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.268 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:32.268 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd 
bdev_passthru_delete BaseBdev1 00:17:32.268 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.268 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.268 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.268 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:32.268 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.268 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.268 [2024-11-20 15:25:18.564788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:32.268 [2024-11-20 15:25:18.564861] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:32.268 [2024-11-20 15:25:18.564894] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:32.268 [2024-11-20 15:25:18.564917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.268 [2024-11-20 15:25:18.565381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.268 [2024-11-20 15:25:18.565401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:32.268 [2024-11-20 15:25:18.565487] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:32.268 [2024-11-20 15:25:18.565502] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:32.268 [2024-11-20 15:25:18.565516] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:32.268 [2024-11-20 15:25:18.565528] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed 
to examine bdev BaseBdev1: Invalid argument 00:17:32.268 BaseBdev1 00:17:32.268 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.268 15:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:33.207 15:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:33.207 15:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.207 15:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.207 15:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:33.207 15:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:33.207 15:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:33.207 15:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.207 15:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.207 15:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.207 15:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.207 15:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.207 15:25:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.207 15:25:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.207 15:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.207 15:25:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.207 15:25:19 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.207 "name": "raid_bdev1", 00:17:33.207 "uuid": "25132a14-b152-4444-af1a-cfd434c99aa4", 00:17:33.207 "strip_size_kb": 0, 00:17:33.207 "state": "online", 00:17:33.207 "raid_level": "raid1", 00:17:33.207 "superblock": true, 00:17:33.207 "num_base_bdevs": 2, 00:17:33.207 "num_base_bdevs_discovered": 1, 00:17:33.207 "num_base_bdevs_operational": 1, 00:17:33.207 "base_bdevs_list": [ 00:17:33.207 { 00:17:33.207 "name": null, 00:17:33.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.207 "is_configured": false, 00:17:33.207 "data_offset": 0, 00:17:33.207 "data_size": 7936 00:17:33.207 }, 00:17:33.207 { 00:17:33.207 "name": "BaseBdev2", 00:17:33.207 "uuid": "254f3114-bf04-5616-873f-cf2dc6728c9b", 00:17:33.207 "is_configured": true, 00:17:33.207 "data_offset": 256, 00:17:33.207 "data_size": 7936 00:17:33.207 } 00:17:33.207 ] 00:17:33.207 }' 00:17:33.207 15:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.207 15:25:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.776 15:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:33.776 15:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.776 15:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:33.776 15:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:33.776 15:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.776 15:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.777 15:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.777 15:25:19 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.777 15:25:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.777 15:25:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.777 15:25:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.777 "name": "raid_bdev1", 00:17:33.777 "uuid": "25132a14-b152-4444-af1a-cfd434c99aa4", 00:17:33.777 "strip_size_kb": 0, 00:17:33.777 "state": "online", 00:17:33.777 "raid_level": "raid1", 00:17:33.777 "superblock": true, 00:17:33.777 "num_base_bdevs": 2, 00:17:33.777 "num_base_bdevs_discovered": 1, 00:17:33.777 "num_base_bdevs_operational": 1, 00:17:33.777 "base_bdevs_list": [ 00:17:33.777 { 00:17:33.777 "name": null, 00:17:33.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.777 "is_configured": false, 00:17:33.777 "data_offset": 0, 00:17:33.777 "data_size": 7936 00:17:33.777 }, 00:17:33.777 { 00:17:33.777 "name": "BaseBdev2", 00:17:33.777 "uuid": "254f3114-bf04-5616-873f-cf2dc6728c9b", 00:17:33.777 "is_configured": true, 00:17:33.777 "data_offset": 256, 00:17:33.777 "data_size": 7936 00:17:33.777 } 00:17:33.777 ] 00:17:33.777 }' 00:17:33.777 15:25:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.777 15:25:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:33.777 15:25:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.777 15:25:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:33.777 15:25:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:33.777 15:25:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:17:33.777 15:25:20 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:33.777 15:25:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:33.777 15:25:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.777 15:25:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:33.777 15:25:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.777 15:25:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:33.777 15:25:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.777 15:25:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.777 [2024-11-20 15:25:20.119711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:33.777 [2024-11-20 15:25:20.119895] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:33.777 [2024-11-20 15:25:20.119915] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:33.777 request: 00:17:33.777 { 00:17:33.777 "base_bdev": "BaseBdev1", 00:17:33.777 "raid_bdev": "raid_bdev1", 00:17:33.777 "method": "bdev_raid_add_base_bdev", 00:17:33.777 "req_id": 1 00:17:33.777 } 00:17:33.777 Got JSON-RPC error response 00:17:33.777 response: 00:17:33.777 { 00:17:33.777 "code": -22, 00:17:33.777 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:33.777 } 00:17:33.777 15:25:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:33.777 15:25:20 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@655 -- # es=1 00:17:33.777 15:25:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:33.777 15:25:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:33.777 15:25:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:33.777 15:25:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:34.714 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:34.715 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.715 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.715 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.715 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.715 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:34.715 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.715 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.715 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.715 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.715 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.715 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.715 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.715 15:25:21 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.715 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.715 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.715 "name": "raid_bdev1", 00:17:34.715 "uuid": "25132a14-b152-4444-af1a-cfd434c99aa4", 00:17:34.715 "strip_size_kb": 0, 00:17:34.715 "state": "online", 00:17:34.715 "raid_level": "raid1", 00:17:34.715 "superblock": true, 00:17:34.715 "num_base_bdevs": 2, 00:17:34.715 "num_base_bdevs_discovered": 1, 00:17:34.715 "num_base_bdevs_operational": 1, 00:17:34.715 "base_bdevs_list": [ 00:17:34.715 { 00:17:34.715 "name": null, 00:17:34.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.715 "is_configured": false, 00:17:34.715 "data_offset": 0, 00:17:34.715 "data_size": 7936 00:17:34.715 }, 00:17:34.715 { 00:17:34.715 "name": "BaseBdev2", 00:17:34.715 "uuid": "254f3114-bf04-5616-873f-cf2dc6728c9b", 00:17:34.715 "is_configured": true, 00:17:34.715 "data_offset": 256, 00:17:34.715 "data_size": 7936 00:17:34.715 } 00:17:34.715 ] 00:17:34.715 }' 00:17:34.715 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.715 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.281 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:35.281 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.281 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:35.281 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:35.281 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.281 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.281 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.281 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.281 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.281 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.281 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.282 "name": "raid_bdev1", 00:17:35.282 "uuid": "25132a14-b152-4444-af1a-cfd434c99aa4", 00:17:35.282 "strip_size_kb": 0, 00:17:35.282 "state": "online", 00:17:35.282 "raid_level": "raid1", 00:17:35.282 "superblock": true, 00:17:35.282 "num_base_bdevs": 2, 00:17:35.282 "num_base_bdevs_discovered": 1, 00:17:35.282 "num_base_bdevs_operational": 1, 00:17:35.282 "base_bdevs_list": [ 00:17:35.282 { 00:17:35.282 "name": null, 00:17:35.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.282 "is_configured": false, 00:17:35.282 "data_offset": 0, 00:17:35.282 "data_size": 7936 00:17:35.282 }, 00:17:35.282 { 00:17:35.282 "name": "BaseBdev2", 00:17:35.282 "uuid": "254f3114-bf04-5616-873f-cf2dc6728c9b", 00:17:35.282 "is_configured": true, 00:17:35.282 "data_offset": 256, 00:17:35.282 "data_size": 7936 00:17:35.282 } 00:17:35.282 ] 00:17:35.282 }' 00:17:35.282 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.282 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:35.282 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.282 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:35.282 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@784 -- # killprocess 86302 00:17:35.282 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86302 ']' 00:17:35.282 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86302 00:17:35.282 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:35.282 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:35.282 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86302 00:17:35.540 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:35.540 killing process with pid 86302 00:17:35.540 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:35.540 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86302' 00:17:35.540 Received shutdown signal, test time was about 60.000000 seconds 00:17:35.540 00:17:35.540 Latency(us) 00:17:35.540 [2024-11-20T15:25:22.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.540 [2024-11-20T15:25:22.022Z] =================================================================================================================== 00:17:35.540 [2024-11-20T15:25:22.022Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:35.540 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86302 00:17:35.540 [2024-11-20 15:25:21.773582] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:35.540 15:25:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86302 00:17:35.540 [2024-11-20 15:25:21.773743] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:35.540 [2024-11-20 15:25:21.773796] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:35.540 [2024-11-20 15:25:21.773810] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:35.798 [2024-11-20 15:25:22.080700] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:36.737 15:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:17:36.737 00:17:36.737 real 0m19.777s 00:17:36.737 user 0m25.515s 00:17:36.737 sys 0m2.890s 00:17:36.737 15:25:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:36.737 ************************************ 00:17:36.737 END TEST raid_rebuild_test_sb_4k 00:17:36.737 ************************************ 00:17:36.737 15:25:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.001 15:25:23 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:17:37.002 15:25:23 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:37.002 15:25:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:37.002 15:25:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:37.002 15:25:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:37.002 ************************************ 00:17:37.002 START TEST raid_state_function_test_sb_md_separate 00:17:37.002 ************************************ 00:17:37.002 15:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:37.002 15:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:37.002 15:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:37.002 15:25:23 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:37.002 15:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:37.002 15:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:37.002 15:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:37.002 15:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:37.002 15:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:37.002 15:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:37.002 15:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:37.002 15:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:37.002 15:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:37.002 15:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:37.002 15:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:37.002 15:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:37.002 15:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:37.002 15:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:37.002 15:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:37.002 15:25:23 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:37.002 15:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:37.002 15:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:37.002 15:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:37.002 15:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=86987 00:17:37.002 15:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:37.002 15:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86987' 00:17:37.002 Process raid pid: 86987 00:17:37.002 15:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 86987 00:17:37.002 15:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 86987 ']' 00:17:37.002 15:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.002 15:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:37.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.002 15:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:37.002 15:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:37.002 15:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.002 [2024-11-20 15:25:23.385288] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:17:37.002 [2024-11-20 15:25:23.385415] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.260 [2024-11-20 15:25:23.565823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.260 [2024-11-20 15:25:23.683070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.519 [2024-11-20 15:25:23.892939] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:37.519 [2024-11-20 15:25:23.892982] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:37.777 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:37.777 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:37.777 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:37.777 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.777 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.777 [2024-11-20 15:25:24.232186] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:37.777 [2024-11-20 15:25:24.232240] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:17:37.777 [2024-11-20 15:25:24.232252] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:37.777 [2024-11-20 15:25:24.232265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:37.777 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.777 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:37.777 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:37.777 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:37.777 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.777 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.777 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:37.777 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.777 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.777 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.777 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.777 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.777 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:17:37.777 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.777 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.036 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.036 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.036 "name": "Existed_Raid", 00:17:38.036 "uuid": "6df42597-2c5d-45ee-9bcf-2b8b2b796bb3", 00:17:38.036 "strip_size_kb": 0, 00:17:38.036 "state": "configuring", 00:17:38.036 "raid_level": "raid1", 00:17:38.037 "superblock": true, 00:17:38.037 "num_base_bdevs": 2, 00:17:38.037 "num_base_bdevs_discovered": 0, 00:17:38.037 "num_base_bdevs_operational": 2, 00:17:38.037 "base_bdevs_list": [ 00:17:38.037 { 00:17:38.037 "name": "BaseBdev1", 00:17:38.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.037 "is_configured": false, 00:17:38.037 "data_offset": 0, 00:17:38.037 "data_size": 0 00:17:38.037 }, 00:17:38.037 { 00:17:38.037 "name": "BaseBdev2", 00:17:38.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.037 "is_configured": false, 00:17:38.037 "data_offset": 0, 00:17:38.037 "data_size": 0 00:17:38.037 } 00:17:38.037 ] 00:17:38.037 }' 00:17:38.037 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.037 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.295 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:38.295 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.295 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.295 [2024-11-20 
15:25:24.687498] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:38.295 [2024-11-20 15:25:24.687547] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:38.295 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.295 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:38.295 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.295 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.295 [2024-11-20 15:25:24.699493] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:38.295 [2024-11-20 15:25:24.699549] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:38.295 [2024-11-20 15:25:24.699560] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:38.295 [2024-11-20 15:25:24.699577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:38.295 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.295 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:38.296 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.296 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.296 [2024-11-20 15:25:24.751357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:38.296 BaseBdev1 
00:17:38.296 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.296 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:38.296 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:38.296 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:38.296 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:38.296 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:38.296 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:38.296 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:38.296 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.296 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.296 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.296 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:38.296 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.296 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.555 [ 00:17:38.555 { 00:17:38.555 "name": "BaseBdev1", 00:17:38.555 "aliases": [ 00:17:38.555 "09203b71-3838-495f-aa15-e20c035e226c" 00:17:38.555 ], 00:17:38.555 "product_name": "Malloc disk", 00:17:38.555 
"block_size": 4096, 00:17:38.555 "num_blocks": 8192, 00:17:38.555 "uuid": "09203b71-3838-495f-aa15-e20c035e226c", 00:17:38.555 "md_size": 32, 00:17:38.555 "md_interleave": false, 00:17:38.555 "dif_type": 0, 00:17:38.555 "assigned_rate_limits": { 00:17:38.555 "rw_ios_per_sec": 0, 00:17:38.555 "rw_mbytes_per_sec": 0, 00:17:38.555 "r_mbytes_per_sec": 0, 00:17:38.555 "w_mbytes_per_sec": 0 00:17:38.555 }, 00:17:38.555 "claimed": true, 00:17:38.555 "claim_type": "exclusive_write", 00:17:38.555 "zoned": false, 00:17:38.555 "supported_io_types": { 00:17:38.555 "read": true, 00:17:38.555 "write": true, 00:17:38.555 "unmap": true, 00:17:38.555 "flush": true, 00:17:38.555 "reset": true, 00:17:38.555 "nvme_admin": false, 00:17:38.555 "nvme_io": false, 00:17:38.555 "nvme_io_md": false, 00:17:38.555 "write_zeroes": true, 00:17:38.555 "zcopy": true, 00:17:38.555 "get_zone_info": false, 00:17:38.555 "zone_management": false, 00:17:38.555 "zone_append": false, 00:17:38.555 "compare": false, 00:17:38.555 "compare_and_write": false, 00:17:38.555 "abort": true, 00:17:38.555 "seek_hole": false, 00:17:38.555 "seek_data": false, 00:17:38.555 "copy": true, 00:17:38.555 "nvme_iov_md": false 00:17:38.555 }, 00:17:38.555 "memory_domains": [ 00:17:38.555 { 00:17:38.555 "dma_device_id": "system", 00:17:38.555 "dma_device_type": 1 00:17:38.555 }, 00:17:38.555 { 00:17:38.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.555 "dma_device_type": 2 00:17:38.555 } 00:17:38.555 ], 00:17:38.555 "driver_specific": {} 00:17:38.555 } 00:17:38.555 ] 00:17:38.555 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.555 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:38.555 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:38.555 15:25:24 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:38.555 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:38.555 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:38.555 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:38.555 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:38.555 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.555 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.555 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.555 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.555 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.555 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.555 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.555 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.555 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.555 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.555 "name": "Existed_Raid", 00:17:38.555 "uuid": "8815729b-d1bc-4c33-b6aa-99224d5227e4", 
00:17:38.555 "strip_size_kb": 0, 00:17:38.555 "state": "configuring", 00:17:38.555 "raid_level": "raid1", 00:17:38.555 "superblock": true, 00:17:38.555 "num_base_bdevs": 2, 00:17:38.555 "num_base_bdevs_discovered": 1, 00:17:38.555 "num_base_bdevs_operational": 2, 00:17:38.555 "base_bdevs_list": [ 00:17:38.555 { 00:17:38.555 "name": "BaseBdev1", 00:17:38.555 "uuid": "09203b71-3838-495f-aa15-e20c035e226c", 00:17:38.555 "is_configured": true, 00:17:38.555 "data_offset": 256, 00:17:38.555 "data_size": 7936 00:17:38.555 }, 00:17:38.555 { 00:17:38.555 "name": "BaseBdev2", 00:17:38.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.555 "is_configured": false, 00:17:38.555 "data_offset": 0, 00:17:38.555 "data_size": 0 00:17:38.555 } 00:17:38.555 ] 00:17:38.555 }' 00:17:38.555 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.555 15:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.814 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:38.814 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.814 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.814 [2024-11-20 15:25:25.222857] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:38.814 [2024-11-20 15:25:25.222918] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:38.814 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.814 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:38.814 15:25:25 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.814 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.814 [2024-11-20 15:25:25.234941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:38.814 [2024-11-20 15:25:25.237004] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:38.814 [2024-11-20 15:25:25.237049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:38.814 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.814 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:38.814 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:38.814 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:38.814 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:38.814 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:38.814 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:38.814 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:38.814 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:38.814 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.814 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.814 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.814 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.814 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.814 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.814 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.814 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.814 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.814 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.814 "name": "Existed_Raid", 00:17:38.814 "uuid": "7673ebdf-1180-4fb1-aca7-a8409a3d58b2", 00:17:38.814 "strip_size_kb": 0, 00:17:38.814 "state": "configuring", 00:17:38.814 "raid_level": "raid1", 00:17:38.814 "superblock": true, 00:17:38.814 "num_base_bdevs": 2, 00:17:38.814 "num_base_bdevs_discovered": 1, 00:17:38.814 "num_base_bdevs_operational": 2, 00:17:38.814 "base_bdevs_list": [ 00:17:38.814 { 00:17:38.814 "name": "BaseBdev1", 00:17:38.814 "uuid": "09203b71-3838-495f-aa15-e20c035e226c", 00:17:38.814 "is_configured": true, 00:17:38.814 "data_offset": 256, 00:17:38.814 "data_size": 7936 00:17:38.814 }, 00:17:38.814 { 00:17:38.814 "name": "BaseBdev2", 00:17:38.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.814 "is_configured": false, 00:17:38.814 "data_offset": 0, 00:17:38.814 "data_size": 0 00:17:38.814 } 00:17:38.814 ] 00:17:38.814 }' 00:17:38.814 15:25:25 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.814 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.382 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:39.382 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.382 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.382 [2024-11-20 15:25:25.699638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:39.382 [2024-11-20 15:25:25.699899] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:39.382 [2024-11-20 15:25:25.699919] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:39.382 [2024-11-20 15:25:25.700004] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:39.382 [2024-11-20 15:25:25.700139] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:39.382 [2024-11-20 15:25:25.700153] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:39.382 [2024-11-20 15:25:25.700239] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.382 BaseBdev2 00:17:39.382 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.382 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:39.382 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:39.382 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:39.382 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:39.382 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:39.382 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:39.382 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:39.382 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.382 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.382 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.382 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:39.382 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.382 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.382 [ 00:17:39.382 { 00:17:39.382 "name": "BaseBdev2", 00:17:39.382 "aliases": [ 00:17:39.382 "c3a09558-f9ae-4573-b192-17b709b89193" 00:17:39.382 ], 00:17:39.382 "product_name": "Malloc disk", 00:17:39.382 "block_size": 4096, 00:17:39.382 "num_blocks": 8192, 00:17:39.382 "uuid": "c3a09558-f9ae-4573-b192-17b709b89193", 00:17:39.382 "md_size": 32, 00:17:39.382 "md_interleave": false, 00:17:39.382 "dif_type": 0, 00:17:39.382 "assigned_rate_limits": { 00:17:39.382 "rw_ios_per_sec": 0, 00:17:39.382 "rw_mbytes_per_sec": 0, 00:17:39.382 "r_mbytes_per_sec": 0, 00:17:39.382 "w_mbytes_per_sec": 0 00:17:39.382 }, 00:17:39.382 "claimed": true, 00:17:39.382 "claim_type": 
"exclusive_write", 00:17:39.382 "zoned": false, 00:17:39.382 "supported_io_types": { 00:17:39.382 "read": true, 00:17:39.382 "write": true, 00:17:39.382 "unmap": true, 00:17:39.382 "flush": true, 00:17:39.382 "reset": true, 00:17:39.382 "nvme_admin": false, 00:17:39.382 "nvme_io": false, 00:17:39.382 "nvme_io_md": false, 00:17:39.382 "write_zeroes": true, 00:17:39.382 "zcopy": true, 00:17:39.382 "get_zone_info": false, 00:17:39.382 "zone_management": false, 00:17:39.382 "zone_append": false, 00:17:39.382 "compare": false, 00:17:39.382 "compare_and_write": false, 00:17:39.382 "abort": true, 00:17:39.382 "seek_hole": false, 00:17:39.382 "seek_data": false, 00:17:39.382 "copy": true, 00:17:39.382 "nvme_iov_md": false 00:17:39.382 }, 00:17:39.382 "memory_domains": [ 00:17:39.382 { 00:17:39.382 "dma_device_id": "system", 00:17:39.382 "dma_device_type": 1 00:17:39.382 }, 00:17:39.382 { 00:17:39.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.382 "dma_device_type": 2 00:17:39.382 } 00:17:39.382 ], 00:17:39.382 "driver_specific": {} 00:17:39.382 } 00:17:39.382 ] 00:17:39.382 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.382 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:39.382 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:39.382 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:39.382 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:39.382 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:39.382 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.383 
15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:39.383 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:39.383 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:39.383 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.383 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.383 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.383 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.383 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.383 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.383 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.383 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.383 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.383 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.383 "name": "Existed_Raid", 00:17:39.383 "uuid": "7673ebdf-1180-4fb1-aca7-a8409a3d58b2", 00:17:39.383 "strip_size_kb": 0, 00:17:39.383 "state": "online", 00:17:39.383 "raid_level": "raid1", 00:17:39.383 "superblock": true, 00:17:39.383 "num_base_bdevs": 2, 00:17:39.383 "num_base_bdevs_discovered": 2, 00:17:39.383 "num_base_bdevs_operational": 2, 00:17:39.383 
"base_bdevs_list": [ 00:17:39.383 { 00:17:39.383 "name": "BaseBdev1", 00:17:39.383 "uuid": "09203b71-3838-495f-aa15-e20c035e226c", 00:17:39.383 "is_configured": true, 00:17:39.383 "data_offset": 256, 00:17:39.383 "data_size": 7936 00:17:39.383 }, 00:17:39.383 { 00:17:39.383 "name": "BaseBdev2", 00:17:39.383 "uuid": "c3a09558-f9ae-4573-b192-17b709b89193", 00:17:39.383 "is_configured": true, 00:17:39.383 "data_offset": 256, 00:17:39.383 "data_size": 7936 00:17:39.383 } 00:17:39.383 ] 00:17:39.383 }' 00:17:39.383 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.383 15:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:17:39.950 [2024-11-20 15:25:26.183337] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:39.950 "name": "Existed_Raid", 00:17:39.950 "aliases": [ 00:17:39.950 "7673ebdf-1180-4fb1-aca7-a8409a3d58b2" 00:17:39.950 ], 00:17:39.950 "product_name": "Raid Volume", 00:17:39.950 "block_size": 4096, 00:17:39.950 "num_blocks": 7936, 00:17:39.950 "uuid": "7673ebdf-1180-4fb1-aca7-a8409a3d58b2", 00:17:39.950 "md_size": 32, 00:17:39.950 "md_interleave": false, 00:17:39.950 "dif_type": 0, 00:17:39.950 "assigned_rate_limits": { 00:17:39.950 "rw_ios_per_sec": 0, 00:17:39.950 "rw_mbytes_per_sec": 0, 00:17:39.950 "r_mbytes_per_sec": 0, 00:17:39.950 "w_mbytes_per_sec": 0 00:17:39.950 }, 00:17:39.950 "claimed": false, 00:17:39.950 "zoned": false, 00:17:39.950 "supported_io_types": { 00:17:39.950 "read": true, 00:17:39.950 "write": true, 00:17:39.950 "unmap": false, 00:17:39.950 "flush": false, 00:17:39.950 "reset": true, 00:17:39.950 "nvme_admin": false, 00:17:39.950 "nvme_io": false, 00:17:39.950 "nvme_io_md": false, 00:17:39.950 "write_zeroes": true, 00:17:39.950 "zcopy": false, 00:17:39.950 "get_zone_info": false, 00:17:39.950 "zone_management": false, 00:17:39.950 "zone_append": false, 00:17:39.950 "compare": false, 00:17:39.950 "compare_and_write": false, 00:17:39.950 "abort": false, 00:17:39.950 "seek_hole": false, 00:17:39.950 "seek_data": false, 00:17:39.950 "copy": false, 00:17:39.950 "nvme_iov_md": false 00:17:39.950 }, 00:17:39.950 "memory_domains": [ 00:17:39.950 { 00:17:39.950 "dma_device_id": "system", 00:17:39.950 "dma_device_type": 1 00:17:39.950 }, 00:17:39.950 { 00:17:39.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.950 "dma_device_type": 2 00:17:39.950 }, 00:17:39.950 { 
00:17:39.950 "dma_device_id": "system", 00:17:39.950 "dma_device_type": 1 00:17:39.950 }, 00:17:39.950 { 00:17:39.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.950 "dma_device_type": 2 00:17:39.950 } 00:17:39.950 ], 00:17:39.950 "driver_specific": { 00:17:39.950 "raid": { 00:17:39.950 "uuid": "7673ebdf-1180-4fb1-aca7-a8409a3d58b2", 00:17:39.950 "strip_size_kb": 0, 00:17:39.950 "state": "online", 00:17:39.950 "raid_level": "raid1", 00:17:39.950 "superblock": true, 00:17:39.950 "num_base_bdevs": 2, 00:17:39.950 "num_base_bdevs_discovered": 2, 00:17:39.950 "num_base_bdevs_operational": 2, 00:17:39.950 "base_bdevs_list": [ 00:17:39.950 { 00:17:39.950 "name": "BaseBdev1", 00:17:39.950 "uuid": "09203b71-3838-495f-aa15-e20c035e226c", 00:17:39.950 "is_configured": true, 00:17:39.950 "data_offset": 256, 00:17:39.950 "data_size": 7936 00:17:39.950 }, 00:17:39.950 { 00:17:39.950 "name": "BaseBdev2", 00:17:39.950 "uuid": "c3a09558-f9ae-4573-b192-17b709b89193", 00:17:39.950 "is_configured": true, 00:17:39.950 "data_offset": 256, 00:17:39.950 "data_size": 7936 00:17:39.950 } 00:17:39.950 ] 00:17:39.950 } 00:17:39.950 } 00:17:39.950 }' 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:39.950 BaseBdev2' 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.950 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.950 [2024-11-20 15:25:26.414894] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:40.209 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.209 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:40.209 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:40.209 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:40.209 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:40.209 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:40.209 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:40.209 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:40.209 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.209 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:40.209 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:40.209 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:40.209 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.209 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.209 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.209 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.209 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.209 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.209 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.209 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.209 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.209 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.209 "name": "Existed_Raid", 00:17:40.209 "uuid": "7673ebdf-1180-4fb1-aca7-a8409a3d58b2", 00:17:40.209 "strip_size_kb": 0, 00:17:40.209 "state": "online", 00:17:40.209 "raid_level": "raid1", 00:17:40.209 "superblock": true, 00:17:40.209 "num_base_bdevs": 2, 00:17:40.209 "num_base_bdevs_discovered": 1, 00:17:40.209 "num_base_bdevs_operational": 1, 00:17:40.209 "base_bdevs_list": [ 00:17:40.209 { 00:17:40.209 "name": null, 00:17:40.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.209 "is_configured": false, 00:17:40.209 "data_offset": 0, 00:17:40.209 "data_size": 7936 00:17:40.209 }, 00:17:40.209 { 00:17:40.209 "name": "BaseBdev2", 00:17:40.209 "uuid": 
"c3a09558-f9ae-4573-b192-17b709b89193", 00:17:40.209 "is_configured": true, 00:17:40.209 "data_offset": 256, 00:17:40.209 "data_size": 7936 00:17:40.209 } 00:17:40.209 ] 00:17:40.209 }' 00:17:40.209 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.209 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.468 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:40.468 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:40.468 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:40.468 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.468 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.468 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.727 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.727 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:40.727 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:40.727 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:40.727 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.727 15:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.727 [2024-11-20 15:25:26.993841] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:40.727 [2024-11-20 15:25:26.993951] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:40.727 [2024-11-20 15:25:27.100485] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:40.727 [2024-11-20 15:25:27.100752] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:40.727 [2024-11-20 15:25:27.100868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:40.727 15:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.727 15:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:40.727 15:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:40.727 15:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.727 15:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:40.727 15:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.727 15:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.727 15:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.727 15:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:40.727 15:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:40.727 15:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:40.727 15:25:27 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 86987 00:17:40.727 15:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 86987 ']' 00:17:40.727 15:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 86987 00:17:40.727 15:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:40.727 15:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:40.727 15:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86987 00:17:40.727 15:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:40.727 killing process with pid 86987 00:17:40.727 15:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:40.727 15:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86987' 00:17:40.727 15:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 86987 00:17:40.727 [2024-11-20 15:25:27.196803] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:40.727 15:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 86987 00:17:40.986 [2024-11-20 15:25:27.214729] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:41.922 15:25:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:17:41.922 00:17:41.922 real 0m5.066s 00:17:41.922 user 0m7.232s 00:17:41.922 sys 0m0.975s 00:17:41.922 15:25:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:41.922 
************************************ 00:17:41.922 15:25:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.922 END TEST raid_state_function_test_sb_md_separate 00:17:41.922 ************************************ 00:17:42.180 15:25:28 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:42.180 15:25:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:42.180 15:25:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:42.180 15:25:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:42.180 ************************************ 00:17:42.180 START TEST raid_superblock_test_md_separate 00:17:42.180 ************************************ 00:17:42.180 15:25:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:42.180 15:25:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:42.180 15:25:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:42.180 15:25:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:42.180 15:25:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:42.180 15:25:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:42.180 15:25:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:42.180 15:25:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:42.180 15:25:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:42.180 15:25:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:17:42.180 15:25:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:42.180 15:25:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:42.180 15:25:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:42.180 15:25:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:42.180 15:25:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:42.180 15:25:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:42.180 15:25:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87234 00:17:42.180 15:25:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:42.180 15:25:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87234 00:17:42.180 15:25:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87234 ']' 00:17:42.180 15:25:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.180 15:25:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:42.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.180 15:25:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:42.180 15:25:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:42.180 15:25:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.180 [2024-11-20 15:25:28.534083] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:17:42.180 [2024-11-20 15:25:28.534724] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87234 ] 00:17:42.438 [2024-11-20 15:25:28.733377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.438 [2024-11-20 15:25:28.855306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.697 [2024-11-20 15:25:29.070349] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:42.697 [2024-11-20 15:25:29.070419] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:42.954 15:25:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:42.954 15:25:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:42.954 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:42.954 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:42.954 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:42.954 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:42.954 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:42.954 15:25:29 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:42.954 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:42.954 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:42.954 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:17:42.955 15:25:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.955 15:25:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.955 malloc1 00:17:42.955 15:25:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.955 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:42.955 15:25:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.955 15:25:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.955 [2024-11-20 15:25:29.424517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:42.955 [2024-11-20 15:25:29.424741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.955 [2024-11-20 15:25:29.424805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:42.955 [2024-11-20 15:25:29.424892] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.955 [2024-11-20 15:25:29.427153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.955 [2024-11-20 15:25:29.427326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:17:42.955 pt1 00:17:42.955 15:25:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.955 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:42.955 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:42.955 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:42.955 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:42.955 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:42.955 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:42.955 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:42.955 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:42.955 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:42.955 15:25:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.955 15:25:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.224 malloc2 00:17:43.224 15:25:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.224 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:43.224 15:25:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.224 15:25:29 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.224 [2024-11-20 15:25:29.478200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:43.224 [2024-11-20 15:25:29.478403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:43.224 [2024-11-20 15:25:29.478464] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:43.224 [2024-11-20 15:25:29.478564] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:43.224 [2024-11-20 15:25:29.480784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:43.224 [2024-11-20 15:25:29.480931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:43.224 pt2 00:17:43.224 15:25:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.224 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:43.224 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:43.224 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:43.224 15:25:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.225 15:25:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.225 [2024-11-20 15:25:29.490210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:43.225 [2024-11-20 15:25:29.492289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:43.225 [2024-11-20 15:25:29.492480] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:43.225 [2024-11-20 15:25:29.492496] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:43.225 [2024-11-20 15:25:29.492586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:43.225 [2024-11-20 15:25:29.492738] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:43.225 [2024-11-20 15:25:29.492753] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:43.225 [2024-11-20 15:25:29.492885] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.225 15:25:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.225 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:43.225 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.225 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.225 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.225 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.225 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:43.225 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.225 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.225 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.225 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.225 15:25:29 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.225 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.225 15:25:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.225 15:25:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.225 15:25:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.225 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.225 "name": "raid_bdev1", 00:17:43.225 "uuid": "e0fc6825-c743-4ec9-85f4-dff55449d4b9", 00:17:43.225 "strip_size_kb": 0, 00:17:43.225 "state": "online", 00:17:43.225 "raid_level": "raid1", 00:17:43.225 "superblock": true, 00:17:43.225 "num_base_bdevs": 2, 00:17:43.225 "num_base_bdevs_discovered": 2, 00:17:43.225 "num_base_bdevs_operational": 2, 00:17:43.225 "base_bdevs_list": [ 00:17:43.225 { 00:17:43.225 "name": "pt1", 00:17:43.225 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:43.225 "is_configured": true, 00:17:43.225 "data_offset": 256, 00:17:43.225 "data_size": 7936 00:17:43.225 }, 00:17:43.225 { 00:17:43.225 "name": "pt2", 00:17:43.225 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:43.225 "is_configured": true, 00:17:43.225 "data_offset": 256, 00:17:43.225 "data_size": 7936 00:17:43.225 } 00:17:43.225 ] 00:17:43.225 }' 00:17:43.225 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.225 15:25:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.498 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:43.498 15:25:29 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:43.498 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:43.498 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:43.498 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:43.498 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:43.498 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:43.498 15:25:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.498 15:25:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.498 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:43.498 [2024-11-20 15:25:29.949877] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:43.758 15:25:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.758 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:43.758 "name": "raid_bdev1", 00:17:43.758 "aliases": [ 00:17:43.758 "e0fc6825-c743-4ec9-85f4-dff55449d4b9" 00:17:43.758 ], 00:17:43.758 "product_name": "Raid Volume", 00:17:43.758 "block_size": 4096, 00:17:43.758 "num_blocks": 7936, 00:17:43.758 "uuid": "e0fc6825-c743-4ec9-85f4-dff55449d4b9", 00:17:43.758 "md_size": 32, 00:17:43.758 "md_interleave": false, 00:17:43.758 "dif_type": 0, 00:17:43.758 "assigned_rate_limits": { 00:17:43.758 "rw_ios_per_sec": 0, 00:17:43.758 "rw_mbytes_per_sec": 0, 00:17:43.758 "r_mbytes_per_sec": 0, 00:17:43.758 "w_mbytes_per_sec": 0 00:17:43.758 }, 00:17:43.758 "claimed": false, 00:17:43.758 "zoned": false, 
00:17:43.758 "supported_io_types": { 00:17:43.758 "read": true, 00:17:43.758 "write": true, 00:17:43.758 "unmap": false, 00:17:43.758 "flush": false, 00:17:43.758 "reset": true, 00:17:43.758 "nvme_admin": false, 00:17:43.758 "nvme_io": false, 00:17:43.758 "nvme_io_md": false, 00:17:43.758 "write_zeroes": true, 00:17:43.758 "zcopy": false, 00:17:43.758 "get_zone_info": false, 00:17:43.758 "zone_management": false, 00:17:43.758 "zone_append": false, 00:17:43.758 "compare": false, 00:17:43.758 "compare_and_write": false, 00:17:43.758 "abort": false, 00:17:43.758 "seek_hole": false, 00:17:43.758 "seek_data": false, 00:17:43.758 "copy": false, 00:17:43.758 "nvme_iov_md": false 00:17:43.758 }, 00:17:43.758 "memory_domains": [ 00:17:43.758 { 00:17:43.758 "dma_device_id": "system", 00:17:43.758 "dma_device_type": 1 00:17:43.758 }, 00:17:43.758 { 00:17:43.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.758 "dma_device_type": 2 00:17:43.758 }, 00:17:43.758 { 00:17:43.758 "dma_device_id": "system", 00:17:43.758 "dma_device_type": 1 00:17:43.758 }, 00:17:43.758 { 00:17:43.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.758 "dma_device_type": 2 00:17:43.758 } 00:17:43.758 ], 00:17:43.758 "driver_specific": { 00:17:43.758 "raid": { 00:17:43.758 "uuid": "e0fc6825-c743-4ec9-85f4-dff55449d4b9", 00:17:43.758 "strip_size_kb": 0, 00:17:43.758 "state": "online", 00:17:43.758 "raid_level": "raid1", 00:17:43.758 "superblock": true, 00:17:43.758 "num_base_bdevs": 2, 00:17:43.758 "num_base_bdevs_discovered": 2, 00:17:43.758 "num_base_bdevs_operational": 2, 00:17:43.758 "base_bdevs_list": [ 00:17:43.758 { 00:17:43.758 "name": "pt1", 00:17:43.758 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:43.758 "is_configured": true, 00:17:43.758 "data_offset": 256, 00:17:43.758 "data_size": 7936 00:17:43.758 }, 00:17:43.758 { 00:17:43.758 "name": "pt2", 00:17:43.758 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:43.758 "is_configured": true, 00:17:43.758 "data_offset": 256, 
00:17:43.758 "data_size": 7936 00:17:43.758 } 00:17:43.758 ] 00:17:43.758 } 00:17:43.758 } 00:17:43.758 }' 00:17:43.758 15:25:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:43.758 pt2' 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:43.758 [2024-11-20 15:25:30.161524] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e0fc6825-c743-4ec9-85f4-dff55449d4b9 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z e0fc6825-c743-4ec9-85f4-dff55449d4b9 ']' 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.758 [2024-11-20 15:25:30.205186] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:43.758 [2024-11-20 15:25:30.205364] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:43.758 [2024-11-20 15:25:30.205487] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:43.758 [2024-11-20 15:25:30.205557] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:43.758 [2024-11-20 15:25:30.205572] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:43.758 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.018 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:44.018 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:44.018 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:44.018 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:17:44.018 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.018 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.018 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.018 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:44.018 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:44.018 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.018 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.018 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.018 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:44.018 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:44.018 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.018 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.018 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.018 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:44.018 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:44.018 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:17:44.018 15:25:30 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:44.018 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:44.018 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:44.018 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:44.018 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:44.018 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.019 [2024-11-20 15:25:30.345001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:44.019 [2024-11-20 15:25:30.347139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:44.019 [2024-11-20 15:25:30.347216] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:44.019 [2024-11-20 15:25:30.347280] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:44.019 [2024-11-20 15:25:30.347299] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:44.019 [2024-11-20 15:25:30.347311] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:44.019 request: 00:17:44.019 { 00:17:44.019 "name": 
"raid_bdev1", 00:17:44.019 "raid_level": "raid1", 00:17:44.019 "base_bdevs": [ 00:17:44.019 "malloc1", 00:17:44.019 "malloc2" 00:17:44.019 ], 00:17:44.019 "superblock": false, 00:17:44.019 "method": "bdev_raid_create", 00:17:44.019 "req_id": 1 00:17:44.019 } 00:17:44.019 Got JSON-RPC error response 00:17:44.019 response: 00:17:44.019 { 00:17:44.019 "code": -17, 00:17:44.019 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:44.019 } 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.019 [2024-11-20 15:25:30.412893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:44.019 [2024-11-20 15:25:30.412968] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.019 [2024-11-20 15:25:30.412988] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:44.019 [2024-11-20 15:25:30.413003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.019 [2024-11-20 15:25:30.415249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.019 [2024-11-20 15:25:30.415685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:44.019 [2024-11-20 15:25:30.415771] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:44.019 [2024-11-20 15:25:30.415838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:44.019 pt1 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.019 "name": "raid_bdev1", 00:17:44.019 "uuid": "e0fc6825-c743-4ec9-85f4-dff55449d4b9", 00:17:44.019 "strip_size_kb": 0, 00:17:44.019 "state": "configuring", 00:17:44.019 "raid_level": "raid1", 00:17:44.019 "superblock": true, 00:17:44.019 "num_base_bdevs": 2, 00:17:44.019 "num_base_bdevs_discovered": 1, 00:17:44.019 "num_base_bdevs_operational": 2, 00:17:44.019 "base_bdevs_list": [ 00:17:44.019 { 00:17:44.019 "name": "pt1", 00:17:44.019 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:44.019 "is_configured": true, 00:17:44.019 "data_offset": 256, 00:17:44.019 "data_size": 7936 00:17:44.019 }, 00:17:44.019 { 00:17:44.019 "name": null, 00:17:44.019 
"uuid": "00000000-0000-0000-0000-000000000002", 00:17:44.019 "is_configured": false, 00:17:44.019 "data_offset": 256, 00:17:44.019 "data_size": 7936 00:17:44.019 } 00:17:44.019 ] 00:17:44.019 }' 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.019 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.589 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:44.589 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:44.589 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:44.589 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:44.589 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.589 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.589 [2024-11-20 15:25:30.868388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:44.589 [2024-11-20 15:25:30.868475] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.589 [2024-11-20 15:25:30.868498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:44.589 [2024-11-20 15:25:30.868513] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.589 [2024-11-20 15:25:30.868756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.589 [2024-11-20 15:25:30.868778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:44.589 [2024-11-20 15:25:30.868833] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:17:44.589 [2024-11-20 15:25:30.868857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:44.589 [2024-11-20 15:25:30.868966] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:44.589 [2024-11-20 15:25:30.868979] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:44.589 [2024-11-20 15:25:30.869054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:44.589 [2024-11-20 15:25:30.869161] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:44.589 [2024-11-20 15:25:30.869170] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:44.589 [2024-11-20 15:25:30.869276] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.589 pt2 00:17:44.589 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.589 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:44.589 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:44.589 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:44.589 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.589 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.589 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.589 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.589 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:17:44.589 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.589 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.589 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.589 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.589 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.589 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.589 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.589 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.589 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.589 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.589 "name": "raid_bdev1", 00:17:44.589 "uuid": "e0fc6825-c743-4ec9-85f4-dff55449d4b9", 00:17:44.589 "strip_size_kb": 0, 00:17:44.589 "state": "online", 00:17:44.589 "raid_level": "raid1", 00:17:44.589 "superblock": true, 00:17:44.589 "num_base_bdevs": 2, 00:17:44.589 "num_base_bdevs_discovered": 2, 00:17:44.589 "num_base_bdevs_operational": 2, 00:17:44.589 "base_bdevs_list": [ 00:17:44.589 { 00:17:44.589 "name": "pt1", 00:17:44.589 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:44.589 "is_configured": true, 00:17:44.589 "data_offset": 256, 00:17:44.589 "data_size": 7936 00:17:44.589 }, 00:17:44.589 { 00:17:44.589 "name": "pt2", 00:17:44.589 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:44.589 "is_configured": true, 00:17:44.589 "data_offset": 256, 
00:17:44.589 "data_size": 7936 00:17:44.589 } 00:17:44.589 ] 00:17:44.589 }' 00:17:44.589 15:25:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.589 15:25:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.848 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:44.849 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:44.849 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:44.849 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:44.849 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:44.849 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:44.849 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:44.849 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.849 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.849 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:44.849 [2024-11-20 15:25:31.252105] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:44.849 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.849 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:44.849 "name": "raid_bdev1", 00:17:44.849 "aliases": [ 00:17:44.849 "e0fc6825-c743-4ec9-85f4-dff55449d4b9" 00:17:44.849 ], 00:17:44.849 "product_name": 
"Raid Volume", 00:17:44.849 "block_size": 4096, 00:17:44.849 "num_blocks": 7936, 00:17:44.849 "uuid": "e0fc6825-c743-4ec9-85f4-dff55449d4b9", 00:17:44.849 "md_size": 32, 00:17:44.849 "md_interleave": false, 00:17:44.849 "dif_type": 0, 00:17:44.849 "assigned_rate_limits": { 00:17:44.849 "rw_ios_per_sec": 0, 00:17:44.849 "rw_mbytes_per_sec": 0, 00:17:44.849 "r_mbytes_per_sec": 0, 00:17:44.849 "w_mbytes_per_sec": 0 00:17:44.849 }, 00:17:44.849 "claimed": false, 00:17:44.849 "zoned": false, 00:17:44.849 "supported_io_types": { 00:17:44.849 "read": true, 00:17:44.849 "write": true, 00:17:44.849 "unmap": false, 00:17:44.849 "flush": false, 00:17:44.849 "reset": true, 00:17:44.849 "nvme_admin": false, 00:17:44.849 "nvme_io": false, 00:17:44.849 "nvme_io_md": false, 00:17:44.849 "write_zeroes": true, 00:17:44.849 "zcopy": false, 00:17:44.849 "get_zone_info": false, 00:17:44.849 "zone_management": false, 00:17:44.849 "zone_append": false, 00:17:44.849 "compare": false, 00:17:44.849 "compare_and_write": false, 00:17:44.849 "abort": false, 00:17:44.849 "seek_hole": false, 00:17:44.849 "seek_data": false, 00:17:44.849 "copy": false, 00:17:44.849 "nvme_iov_md": false 00:17:44.849 }, 00:17:44.849 "memory_domains": [ 00:17:44.849 { 00:17:44.849 "dma_device_id": "system", 00:17:44.849 "dma_device_type": 1 00:17:44.849 }, 00:17:44.849 { 00:17:44.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.849 "dma_device_type": 2 00:17:44.849 }, 00:17:44.849 { 00:17:44.849 "dma_device_id": "system", 00:17:44.849 "dma_device_type": 1 00:17:44.849 }, 00:17:44.849 { 00:17:44.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.849 "dma_device_type": 2 00:17:44.849 } 00:17:44.849 ], 00:17:44.849 "driver_specific": { 00:17:44.849 "raid": { 00:17:44.849 "uuid": "e0fc6825-c743-4ec9-85f4-dff55449d4b9", 00:17:44.849 "strip_size_kb": 0, 00:17:44.849 "state": "online", 00:17:44.849 "raid_level": "raid1", 00:17:44.849 "superblock": true, 00:17:44.849 "num_base_bdevs": 2, 00:17:44.849 
"num_base_bdevs_discovered": 2, 00:17:44.849 "num_base_bdevs_operational": 2, 00:17:44.849 "base_bdevs_list": [ 00:17:44.849 { 00:17:44.849 "name": "pt1", 00:17:44.849 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:44.849 "is_configured": true, 00:17:44.849 "data_offset": 256, 00:17:44.849 "data_size": 7936 00:17:44.849 }, 00:17:44.849 { 00:17:44.849 "name": "pt2", 00:17:44.849 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:44.849 "is_configured": true, 00:17:44.849 "data_offset": 256, 00:17:44.849 "data_size": 7936 00:17:44.849 } 00:17:44.849 ] 00:17:44.849 } 00:17:44.849 } 00:17:44.849 }' 00:17:44.849 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:44.849 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:44.849 pt2' 00:17:44.849 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.109 
15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.109 [2024-11-20 15:25:31.460079] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' e0fc6825-c743-4ec9-85f4-dff55449d4b9 '!=' e0fc6825-c743-4ec9-85f4-dff55449d4b9 ']' 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.109 [2024-11-20 15:25:31.503855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.109 15:25:31 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.109 "name": "raid_bdev1", 00:17:45.109 "uuid": "e0fc6825-c743-4ec9-85f4-dff55449d4b9", 00:17:45.109 "strip_size_kb": 0, 00:17:45.109 "state": "online", 00:17:45.109 "raid_level": "raid1", 00:17:45.109 "superblock": true, 00:17:45.109 "num_base_bdevs": 2, 00:17:45.109 "num_base_bdevs_discovered": 1, 00:17:45.109 "num_base_bdevs_operational": 1, 00:17:45.109 "base_bdevs_list": [ 00:17:45.109 { 00:17:45.109 "name": null, 00:17:45.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.109 "is_configured": false, 00:17:45.109 "data_offset": 0, 00:17:45.109 "data_size": 7936 00:17:45.109 }, 00:17:45.109 { 00:17:45.109 "name": "pt2", 00:17:45.109 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:45.109 "is_configured": true, 00:17:45.109 "data_offset": 256, 00:17:45.109 "data_size": 7936 00:17:45.109 } 00:17:45.109 ] 00:17:45.109 }' 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:17:45.109 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.678 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:45.678 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.678 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.678 [2024-11-20 15:25:31.907513] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:45.678 [2024-11-20 15:25:31.907732] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:45.678 [2024-11-20 15:25:31.907841] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:45.678 [2024-11-20 15:25:31.907905] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:45.678 [2024-11-20 15:25:31.907919] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:45.678 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.678 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.678 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.678 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.678 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:45.678 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.678 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:45.678 15:25:31 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:45.678 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:45.678 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:45.678 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:45.678 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.678 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.678 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.678 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:45.678 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:45.678 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:45.678 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:45.678 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:17:45.678 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:45.678 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.678 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.678 [2024-11-20 15:25:31.975439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:45.678 [2024-11-20 15:25:31.975735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.678 
[2024-11-20 15:25:31.975765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:45.678 [2024-11-20 15:25:31.975780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.678 [2024-11-20 15:25:31.978068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.679 [2024-11-20 15:25:31.978116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:45.679 [2024-11-20 15:25:31.978181] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:45.679 [2024-11-20 15:25:31.978230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:45.679 [2024-11-20 15:25:31.978336] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:45.679 [2024-11-20 15:25:31.978351] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:45.679 [2024-11-20 15:25:31.978432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:45.679 [2024-11-20 15:25:31.978529] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:45.679 [2024-11-20 15:25:31.978538] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:45.679 [2024-11-20 15:25:31.978637] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.679 pt2 00:17:45.679 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.679 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:45.679 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.679 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:17:45.679 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.679 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.679 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:45.679 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.679 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.679 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.679 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.679 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.679 15:25:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.679 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.679 15:25:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.679 15:25:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.679 15:25:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.679 "name": "raid_bdev1", 00:17:45.679 "uuid": "e0fc6825-c743-4ec9-85f4-dff55449d4b9", 00:17:45.679 "strip_size_kb": 0, 00:17:45.679 "state": "online", 00:17:45.679 "raid_level": "raid1", 00:17:45.679 "superblock": true, 00:17:45.679 "num_base_bdevs": 2, 00:17:45.679 "num_base_bdevs_discovered": 1, 00:17:45.679 "num_base_bdevs_operational": 1, 00:17:45.679 "base_bdevs_list": [ 00:17:45.679 { 00:17:45.679 
"name": null, 00:17:45.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.679 "is_configured": false, 00:17:45.679 "data_offset": 256, 00:17:45.679 "data_size": 7936 00:17:45.679 }, 00:17:45.679 { 00:17:45.679 "name": "pt2", 00:17:45.679 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:45.679 "is_configured": true, 00:17:45.679 "data_offset": 256, 00:17:45.679 "data_size": 7936 00:17:45.679 } 00:17:45.679 ] 00:17:45.679 }' 00:17:45.679 15:25:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.679 15:25:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.939 15:25:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:45.939 15:25:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.939 15:25:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.939 [2024-11-20 15:25:32.378842] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:45.939 [2024-11-20 15:25:32.378879] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:45.939 [2024-11-20 15:25:32.378959] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:45.939 [2024-11-20 15:25:32.379013] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:45.939 [2024-11-20 15:25:32.379025] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:45.939 15:25:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.939 15:25:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:45.939 15:25:32 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.939 15:25:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.939 15:25:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.939 15:25:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.198 15:25:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:46.198 15:25:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:46.198 15:25:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:46.199 15:25:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:46.199 15:25:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.199 15:25:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.199 [2024-11-20 15:25:32.426914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:46.199 [2024-11-20 15:25:32.426990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.199 [2024-11-20 15:25:32.427015] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:46.199 [2024-11-20 15:25:32.427028] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.199 [2024-11-20 15:25:32.429438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.199 [2024-11-20 15:25:32.429483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:46.199 [2024-11-20 15:25:32.429550] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:46.199 
[2024-11-20 15:25:32.429604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:46.199 [2024-11-20 15:25:32.429747] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:46.199 [2024-11-20 15:25:32.429760] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:46.199 [2024-11-20 15:25:32.429781] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:46.199 [2024-11-20 15:25:32.429852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:46.199 [2024-11-20 15:25:32.430124] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:46.199 [2024-11-20 15:25:32.430145] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:46.199 [2024-11-20 15:25:32.430224] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:46.199 [2024-11-20 15:25:32.430346] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:46.199 [2024-11-20 15:25:32.430360] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:46.199 [2024-11-20 15:25:32.430479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.199 pt1 00:17:46.199 15:25:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.199 15:25:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:46.199 15:25:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:46.199 15:25:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:46.199 15:25:32 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.199 15:25:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:46.199 15:25:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.199 15:25:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:46.199 15:25:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.199 15:25:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.199 15:25:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.199 15:25:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.199 15:25:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.199 15:25:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.199 15:25:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.199 15:25:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.199 15:25:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.199 15:25:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.199 "name": "raid_bdev1", 00:17:46.199 "uuid": "e0fc6825-c743-4ec9-85f4-dff55449d4b9", 00:17:46.199 "strip_size_kb": 0, 00:17:46.199 "state": "online", 00:17:46.199 "raid_level": "raid1", 00:17:46.199 "superblock": true, 00:17:46.199 "num_base_bdevs": 2, 00:17:46.199 "num_base_bdevs_discovered": 1, 00:17:46.199 
"num_base_bdevs_operational": 1, 00:17:46.199 "base_bdevs_list": [ 00:17:46.199 { 00:17:46.199 "name": null, 00:17:46.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.199 "is_configured": false, 00:17:46.199 "data_offset": 256, 00:17:46.199 "data_size": 7936 00:17:46.199 }, 00:17:46.199 { 00:17:46.199 "name": "pt2", 00:17:46.199 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:46.199 "is_configured": true, 00:17:46.199 "data_offset": 256, 00:17:46.199 "data_size": 7936 00:17:46.199 } 00:17:46.199 ] 00:17:46.199 }' 00:17:46.199 15:25:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.199 15:25:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.459 15:25:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:46.459 15:25:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:46.459 15:25:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.459 15:25:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.459 15:25:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.459 15:25:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:46.459 15:25:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:46.459 15:25:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:46.459 15:25:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.459 15:25:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.459 [2024-11-20 
15:25:32.930378] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:46.718 15:25:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.718 15:25:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' e0fc6825-c743-4ec9-85f4-dff55449d4b9 '!=' e0fc6825-c743-4ec9-85f4-dff55449d4b9 ']' 00:17:46.718 15:25:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87234 00:17:46.718 15:25:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87234 ']' 00:17:46.718 15:25:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87234 00:17:46.718 15:25:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:46.718 15:25:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:46.718 15:25:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87234 00:17:46.718 killing process with pid 87234 00:17:46.718 15:25:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:46.718 15:25:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:46.718 15:25:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87234' 00:17:46.718 15:25:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87234 00:17:46.718 [2024-11-20 15:25:32.999098] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:46.718 [2024-11-20 15:25:32.999205] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:46.718 15:25:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87234 
00:17:46.718 [2024-11-20 15:25:32.999255] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:46.718 [2024-11-20 15:25:32.999276] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:46.977 [2024-11-20 15:25:33.230984] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:47.913 15:25:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:17:47.913 00:17:47.913 real 0m5.954s 00:17:47.913 user 0m8.912s 00:17:47.913 sys 0m1.245s 00:17:47.913 15:25:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:47.913 15:25:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.913 ************************************ 00:17:47.913 END TEST raid_superblock_test_md_separate 00:17:47.913 ************************************ 00:17:48.173 15:25:34 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:17:48.173 15:25:34 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:17:48.173 15:25:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:48.173 15:25:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:48.173 15:25:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:48.173 ************************************ 00:17:48.173 START TEST raid_rebuild_test_sb_md_separate 00:17:48.173 ************************************ 00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:48.173 
15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87561 00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87561 00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87561 ']' 00:17:48.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:48.173 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.173 [2024-11-20 15:25:34.564853] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:17:48.173 [2024-11-20 15:25:34.565181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:17:48.173 Zero copy mechanism will not be used. 00:17:48.173 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87561 ] 00:17:48.434 [2024-11-20 15:25:34.747729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.434 [2024-11-20 15:25:34.870484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.693 [2024-11-20 15:25:35.083798] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:48.693 [2024-11-20 15:25:35.084093] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:48.952 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:48.952 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:48.952 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:48.952 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:17:48.952 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.952 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.212 BaseBdev1_malloc 
00:17:49.212 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.212 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:49.212 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.212 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.212 [2024-11-20 15:25:35.454001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:49.212 [2024-11-20 15:25:35.454256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.212 [2024-11-20 15:25:35.454294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:49.212 [2024-11-20 15:25:35.454309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.212 [2024-11-20 15:25:35.456534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.212 [2024-11-20 15:25:35.456579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:49.212 BaseBdev1 00:17:49.212 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.212 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:49.212 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:17:49.212 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.212 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.212 BaseBdev2_malloc 00:17:49.212 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.212 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:49.212 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.212 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.212 [2024-11-20 15:25:35.513909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:49.212 [2024-11-20 15:25:35.513989] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.212 [2024-11-20 15:25:35.514015] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:49.212 [2024-11-20 15:25:35.514032] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.212 [2024-11-20 15:25:35.516625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.212 [2024-11-20 15:25:35.516882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:49.212 BaseBdev2 00:17:49.212 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.212 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:17:49.212 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.212 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.212 spare_malloc 00:17:49.212 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.212 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:17:49.212 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.212 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.212 spare_delay 00:17:49.212 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.212 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:49.212 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.212 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.212 [2024-11-20 15:25:35.602581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:49.212 [2024-11-20 15:25:35.602671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.212 [2024-11-20 15:25:35.602703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:49.212 [2024-11-20 15:25:35.602732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.212 [2024-11-20 15:25:35.605119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.212 [2024-11-20 15:25:35.605302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:49.212 spare 00:17:49.212 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.213 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:49.213 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.213 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:49.213 [2024-11-20 15:25:35.614625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:49.213 [2024-11-20 15:25:35.617309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:49.213 [2024-11-20 15:25:35.617686] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:49.213 [2024-11-20 15:25:35.617840] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:49.213 [2024-11-20 15:25:35.617967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:49.213 [2024-11-20 15:25:35.618129] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:49.213 [2024-11-20 15:25:35.618144] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:49.213 [2024-11-20 15:25:35.618298] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.213 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.213 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:49.213 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.213 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.213 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.213 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.213 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:49.213 15:25:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.213 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.213 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.213 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.213 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.213 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.213 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.213 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.213 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.213 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.213 "name": "raid_bdev1", 00:17:49.213 "uuid": "5f15695a-aefd-4719-b42b-bb7cf0975a34", 00:17:49.213 "strip_size_kb": 0, 00:17:49.213 "state": "online", 00:17:49.213 "raid_level": "raid1", 00:17:49.213 "superblock": true, 00:17:49.213 "num_base_bdevs": 2, 00:17:49.213 "num_base_bdevs_discovered": 2, 00:17:49.213 "num_base_bdevs_operational": 2, 00:17:49.213 "base_bdevs_list": [ 00:17:49.213 { 00:17:49.213 "name": "BaseBdev1", 00:17:49.213 "uuid": "e7ff3a1f-b3ac-560a-a96c-736aa32bc2bc", 00:17:49.213 "is_configured": true, 00:17:49.213 "data_offset": 256, 00:17:49.213 "data_size": 7936 00:17:49.213 }, 00:17:49.213 { 00:17:49.213 "name": "BaseBdev2", 00:17:49.213 "uuid": "a2834698-ba7c-58fc-b7d3-27c18e839e3e", 00:17:49.213 "is_configured": true, 00:17:49.213 "data_offset": 256, 00:17:49.213 "data_size": 7936 
00:17:49.213 } 00:17:49.213 ] 00:17:49.213 }' 00:17:49.213 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.213 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.778 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:49.778 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:49.778 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.778 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.778 [2024-11-20 15:25:36.054259] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:49.778 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.778 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:49.778 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.778 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.778 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.778 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:49.778 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.778 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:49.778 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:49.778 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:49.778 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:49.778 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:49.778 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:49.778 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:49.778 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:49.778 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:49.778 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:49.778 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:49.778 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:49.778 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:49.778 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:50.036 [2024-11-20 15:25:36.341889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:50.036 /dev/nbd0 00:17:50.036 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:50.036 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:50.036 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:50.037 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:17:50.037 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:50.037 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:50.037 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:50.037 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:50.037 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:50.037 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:50.037 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:50.037 1+0 records in 00:17:50.037 1+0 records out 00:17:50.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246817 s, 16.6 MB/s 00:17:50.037 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:50.037 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:50.037 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:50.037 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:50.037 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:50.037 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:50.037 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:50.037 15:25:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:50.037 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:50.037 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:50.971 7936+0 records in 00:17:50.971 7936+0 records out 00:17:50.971 32505856 bytes (33 MB, 31 MiB) copied, 0.705121 s, 46.1 MB/s 00:17:50.971 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:50.971 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:50.971 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:50.971 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:50.971 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:50.971 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:50.971 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:50.971 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:50.971 [2024-11-20 15:25:37.347183] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.971 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:50.971 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:50.971 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:50.971 15:25:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:50.971 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:50.971 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:50.971 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:50.972 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:50.972 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.972 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.972 [2024-11-20 15:25:37.363302] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:50.972 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.972 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:50.972 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.972 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.972 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.972 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.972 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:50.972 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.972 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:50.972 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.972 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.972 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.972 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.972 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.972 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.972 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.972 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.972 "name": "raid_bdev1", 00:17:50.972 "uuid": "5f15695a-aefd-4719-b42b-bb7cf0975a34", 00:17:50.972 "strip_size_kb": 0, 00:17:50.972 "state": "online", 00:17:50.972 "raid_level": "raid1", 00:17:50.972 "superblock": true, 00:17:50.972 "num_base_bdevs": 2, 00:17:50.972 "num_base_bdevs_discovered": 1, 00:17:50.972 "num_base_bdevs_operational": 1, 00:17:50.972 "base_bdevs_list": [ 00:17:50.972 { 00:17:50.972 "name": null, 00:17:50.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.972 "is_configured": false, 00:17:50.972 "data_offset": 0, 00:17:50.972 "data_size": 7936 00:17:50.972 }, 00:17:50.972 { 00:17:50.972 "name": "BaseBdev2", 00:17:50.972 "uuid": "a2834698-ba7c-58fc-b7d3-27c18e839e3e", 00:17:50.972 "is_configured": true, 00:17:50.972 "data_offset": 256, 00:17:50.972 "data_size": 7936 00:17:50.972 } 00:17:50.972 ] 00:17:50.972 }' 00:17:50.972 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.972 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:51.540 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:51.540 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.540 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.540 [2024-11-20 15:25:37.786900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:51.540 [2024-11-20 15:25:37.802050] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:51.540 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.540 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:51.540 [2024-11-20 15:25:37.804230] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:52.478 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:52.478 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.478 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:52.478 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:52.478 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.478 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.478 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.478 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:17:52.478 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.478 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.478 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.478 "name": "raid_bdev1", 00:17:52.478 "uuid": "5f15695a-aefd-4719-b42b-bb7cf0975a34", 00:17:52.478 "strip_size_kb": 0, 00:17:52.478 "state": "online", 00:17:52.478 "raid_level": "raid1", 00:17:52.478 "superblock": true, 00:17:52.478 "num_base_bdevs": 2, 00:17:52.478 "num_base_bdevs_discovered": 2, 00:17:52.478 "num_base_bdevs_operational": 2, 00:17:52.478 "process": { 00:17:52.478 "type": "rebuild", 00:17:52.478 "target": "spare", 00:17:52.478 "progress": { 00:17:52.478 "blocks": 2560, 00:17:52.478 "percent": 32 00:17:52.478 } 00:17:52.478 }, 00:17:52.478 "base_bdevs_list": [ 00:17:52.478 { 00:17:52.478 "name": "spare", 00:17:52.478 "uuid": "e6db5f62-ef79-5f7c-8219-d2b1d10b42fc", 00:17:52.478 "is_configured": true, 00:17:52.478 "data_offset": 256, 00:17:52.478 "data_size": 7936 00:17:52.478 }, 00:17:52.478 { 00:17:52.478 "name": "BaseBdev2", 00:17:52.478 "uuid": "a2834698-ba7c-58fc-b7d3-27c18e839e3e", 00:17:52.478 "is_configured": true, 00:17:52.478 "data_offset": 256, 00:17:52.478 "data_size": 7936 00:17:52.478 } 00:17:52.478 ] 00:17:52.478 }' 00:17:52.478 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.478 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:52.478 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.478 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:52.737 15:25:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:52.737 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.737 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.737 [2024-11-20 15:25:38.964119] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:52.737 [2024-11-20 15:25:39.010550] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:52.737 [2024-11-20 15:25:39.010906] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.737 [2024-11-20 15:25:39.010929] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:52.737 [2024-11-20 15:25:39.010944] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:52.737 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.737 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:52.737 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.737 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.737 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.737 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.737 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:52.737 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.737 15:25:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.737 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.737 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.737 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.737 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.737 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.737 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.737 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.737 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.737 "name": "raid_bdev1", 00:17:52.737 "uuid": "5f15695a-aefd-4719-b42b-bb7cf0975a34", 00:17:52.737 "strip_size_kb": 0, 00:17:52.737 "state": "online", 00:17:52.737 "raid_level": "raid1", 00:17:52.737 "superblock": true, 00:17:52.737 "num_base_bdevs": 2, 00:17:52.737 "num_base_bdevs_discovered": 1, 00:17:52.737 "num_base_bdevs_operational": 1, 00:17:52.737 "base_bdevs_list": [ 00:17:52.737 { 00:17:52.737 "name": null, 00:17:52.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.737 "is_configured": false, 00:17:52.737 "data_offset": 0, 00:17:52.737 "data_size": 7936 00:17:52.737 }, 00:17:52.737 { 00:17:52.737 "name": "BaseBdev2", 00:17:52.737 "uuid": "a2834698-ba7c-58fc-b7d3-27c18e839e3e", 00:17:52.737 "is_configured": true, 00:17:52.737 "data_offset": 256, 00:17:52.737 "data_size": 7936 00:17:52.737 } 00:17:52.737 ] 00:17:52.737 }' 00:17:52.737 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.737 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.995 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:52.995 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.995 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:52.995 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:52.995 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.995 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.995 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.995 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.995 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.253 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.253 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:53.253 "name": "raid_bdev1", 00:17:53.253 "uuid": "5f15695a-aefd-4719-b42b-bb7cf0975a34", 00:17:53.253 "strip_size_kb": 0, 00:17:53.253 "state": "online", 00:17:53.253 "raid_level": "raid1", 00:17:53.253 "superblock": true, 00:17:53.253 "num_base_bdevs": 2, 00:17:53.253 "num_base_bdevs_discovered": 1, 00:17:53.253 "num_base_bdevs_operational": 1, 00:17:53.253 "base_bdevs_list": [ 00:17:53.253 { 00:17:53.253 "name": null, 00:17:53.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.253 
"is_configured": false, 00:17:53.253 "data_offset": 0, 00:17:53.253 "data_size": 7936 00:17:53.253 }, 00:17:53.253 { 00:17:53.253 "name": "BaseBdev2", 00:17:53.253 "uuid": "a2834698-ba7c-58fc-b7d3-27c18e839e3e", 00:17:53.253 "is_configured": true, 00:17:53.253 "data_offset": 256, 00:17:53.253 "data_size": 7936 00:17:53.253 } 00:17:53.253 ] 00:17:53.253 }' 00:17:53.253 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.253 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:53.253 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:53.253 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:53.253 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:53.253 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.253 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.253 [2024-11-20 15:25:39.603205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:53.253 [2024-11-20 15:25:39.618504] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:53.253 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.253 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:53.253 [2024-11-20 15:25:39.620828] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:54.186 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:54.186 15:25:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.186 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:54.186 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:54.186 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.186 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.186 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.186 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.186 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.186 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.445 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:54.445 "name": "raid_bdev1", 00:17:54.445 "uuid": "5f15695a-aefd-4719-b42b-bb7cf0975a34", 00:17:54.445 "strip_size_kb": 0, 00:17:54.445 "state": "online", 00:17:54.445 "raid_level": "raid1", 00:17:54.445 "superblock": true, 00:17:54.445 "num_base_bdevs": 2, 00:17:54.445 "num_base_bdevs_discovered": 2, 00:17:54.445 "num_base_bdevs_operational": 2, 00:17:54.445 "process": { 00:17:54.445 "type": "rebuild", 00:17:54.445 "target": "spare", 00:17:54.445 "progress": { 00:17:54.445 "blocks": 2560, 00:17:54.445 "percent": 32 00:17:54.445 } 00:17:54.445 }, 00:17:54.445 "base_bdevs_list": [ 00:17:54.445 { 00:17:54.445 "name": "spare", 00:17:54.445 "uuid": "e6db5f62-ef79-5f7c-8219-d2b1d10b42fc", 00:17:54.445 "is_configured": true, 00:17:54.445 "data_offset": 256, 00:17:54.445 "data_size": 7936 00:17:54.445 }, 
00:17:54.445 { 00:17:54.445 "name": "BaseBdev2", 00:17:54.445 "uuid": "a2834698-ba7c-58fc-b7d3-27c18e839e3e", 00:17:54.445 "is_configured": true, 00:17:54.445 "data_offset": 256, 00:17:54.445 "data_size": 7936 00:17:54.445 } 00:17:54.445 ] 00:17:54.445 }' 00:17:54.445 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:54.445 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:54.445 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:54.445 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:54.445 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:54.445 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:54.445 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:54.445 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:54.445 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:54.445 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:54.445 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=704 00:17:54.445 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:54.445 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:54.445 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.445 15:25:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:54.445 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:54.445 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.445 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.445 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.445 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.445 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.445 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.445 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:54.445 "name": "raid_bdev1", 00:17:54.445 "uuid": "5f15695a-aefd-4719-b42b-bb7cf0975a34", 00:17:54.445 "strip_size_kb": 0, 00:17:54.445 "state": "online", 00:17:54.445 "raid_level": "raid1", 00:17:54.445 "superblock": true, 00:17:54.445 "num_base_bdevs": 2, 00:17:54.445 "num_base_bdevs_discovered": 2, 00:17:54.445 "num_base_bdevs_operational": 2, 00:17:54.445 "process": { 00:17:54.445 "type": "rebuild", 00:17:54.445 "target": "spare", 00:17:54.445 "progress": { 00:17:54.445 "blocks": 2816, 00:17:54.445 "percent": 35 00:17:54.445 } 00:17:54.445 }, 00:17:54.445 "base_bdevs_list": [ 00:17:54.445 { 00:17:54.445 "name": "spare", 00:17:54.445 "uuid": "e6db5f62-ef79-5f7c-8219-d2b1d10b42fc", 00:17:54.445 "is_configured": true, 00:17:54.445 "data_offset": 256, 00:17:54.445 "data_size": 7936 00:17:54.445 }, 00:17:54.445 { 00:17:54.445 "name": "BaseBdev2", 00:17:54.445 "uuid": "a2834698-ba7c-58fc-b7d3-27c18e839e3e", 00:17:54.445 
"is_configured": true, 00:17:54.445 "data_offset": 256, 00:17:54.445 "data_size": 7936 00:17:54.445 } 00:17:54.445 ] 00:17:54.445 }' 00:17:54.445 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:54.445 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:54.445 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:54.445 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:54.445 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:55.831 15:25:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:55.831 15:25:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:55.831 15:25:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.831 15:25:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:55.831 15:25:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:55.831 15:25:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.831 15:25:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.831 15:25:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.831 15:25:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.831 15:25:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.831 15:25:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.831 15:25:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.831 "name": "raid_bdev1", 00:17:55.831 "uuid": "5f15695a-aefd-4719-b42b-bb7cf0975a34", 00:17:55.831 "strip_size_kb": 0, 00:17:55.831 "state": "online", 00:17:55.831 "raid_level": "raid1", 00:17:55.831 "superblock": true, 00:17:55.831 "num_base_bdevs": 2, 00:17:55.831 "num_base_bdevs_discovered": 2, 00:17:55.831 "num_base_bdevs_operational": 2, 00:17:55.831 "process": { 00:17:55.831 "type": "rebuild", 00:17:55.831 "target": "spare", 00:17:55.831 "progress": { 00:17:55.831 "blocks": 5632, 00:17:55.831 "percent": 70 00:17:55.831 } 00:17:55.831 }, 00:17:55.831 "base_bdevs_list": [ 00:17:55.831 { 00:17:55.831 "name": "spare", 00:17:55.831 "uuid": "e6db5f62-ef79-5f7c-8219-d2b1d10b42fc", 00:17:55.831 "is_configured": true, 00:17:55.831 "data_offset": 256, 00:17:55.831 "data_size": 7936 00:17:55.831 }, 00:17:55.831 { 00:17:55.831 "name": "BaseBdev2", 00:17:55.831 "uuid": "a2834698-ba7c-58fc-b7d3-27c18e839e3e", 00:17:55.831 "is_configured": true, 00:17:55.831 "data_offset": 256, 00:17:55.831 "data_size": 7936 00:17:55.831 } 00:17:55.831 ] 00:17:55.831 }' 00:17:55.831 15:25:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.831 15:25:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:55.831 15:25:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:55.831 15:25:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:55.831 15:25:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:56.404 [2024-11-20 15:25:42.735575] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:17:56.404 [2024-11-20 15:25:42.735683] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:56.404 [2024-11-20 15:25:42.735828] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.663 15:25:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:56.663 15:25:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:56.663 15:25:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.663 15:25:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:56.663 15:25:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:56.663 15:25:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.663 15:25:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.663 15:25:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.663 15:25:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.663 15:25:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.663 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.663 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.663 "name": "raid_bdev1", 00:17:56.663 "uuid": "5f15695a-aefd-4719-b42b-bb7cf0975a34", 00:17:56.663 "strip_size_kb": 0, 00:17:56.663 "state": "online", 00:17:56.663 "raid_level": "raid1", 00:17:56.663 "superblock": true, 00:17:56.663 
"num_base_bdevs": 2, 00:17:56.663 "num_base_bdevs_discovered": 2, 00:17:56.663 "num_base_bdevs_operational": 2, 00:17:56.663 "base_bdevs_list": [ 00:17:56.663 { 00:17:56.663 "name": "spare", 00:17:56.663 "uuid": "e6db5f62-ef79-5f7c-8219-d2b1d10b42fc", 00:17:56.663 "is_configured": true, 00:17:56.663 "data_offset": 256, 00:17:56.663 "data_size": 7936 00:17:56.663 }, 00:17:56.663 { 00:17:56.663 "name": "BaseBdev2", 00:17:56.663 "uuid": "a2834698-ba7c-58fc-b7d3-27c18e839e3e", 00:17:56.663 "is_configured": true, 00:17:56.663 "data_offset": 256, 00:17:56.663 "data_size": 7936 00:17:56.663 } 00:17:56.663 ] 00:17:56.663 }' 00:17:56.663 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.663 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:56.663 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.663 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:56.663 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:17:56.663 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:56.663 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.663 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:56.663 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:56.663 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.663 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.663 
15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.663 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.663 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.663 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.922 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.922 "name": "raid_bdev1", 00:17:56.922 "uuid": "5f15695a-aefd-4719-b42b-bb7cf0975a34", 00:17:56.922 "strip_size_kb": 0, 00:17:56.922 "state": "online", 00:17:56.922 "raid_level": "raid1", 00:17:56.922 "superblock": true, 00:17:56.922 "num_base_bdevs": 2, 00:17:56.922 "num_base_bdevs_discovered": 2, 00:17:56.922 "num_base_bdevs_operational": 2, 00:17:56.922 "base_bdevs_list": [ 00:17:56.922 { 00:17:56.922 "name": "spare", 00:17:56.922 "uuid": "e6db5f62-ef79-5f7c-8219-d2b1d10b42fc", 00:17:56.922 "is_configured": true, 00:17:56.922 "data_offset": 256, 00:17:56.922 "data_size": 7936 00:17:56.922 }, 00:17:56.922 { 00:17:56.922 "name": "BaseBdev2", 00:17:56.922 "uuid": "a2834698-ba7c-58fc-b7d3-27c18e839e3e", 00:17:56.922 "is_configured": true, 00:17:56.922 "data_offset": 256, 00:17:56.922 "data_size": 7936 00:17:56.922 } 00:17:56.922 ] 00:17:56.922 }' 00:17:56.922 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.922 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:56.922 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.922 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:56.922 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:56.922 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.922 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.922 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.922 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.922 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:56.922 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.922 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.922 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.922 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.922 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.922 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.922 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.922 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.922 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.922 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.922 "name": "raid_bdev1", 00:17:56.922 "uuid": "5f15695a-aefd-4719-b42b-bb7cf0975a34", 00:17:56.922 
"strip_size_kb": 0, 00:17:56.922 "state": "online", 00:17:56.922 "raid_level": "raid1", 00:17:56.922 "superblock": true, 00:17:56.922 "num_base_bdevs": 2, 00:17:56.922 "num_base_bdevs_discovered": 2, 00:17:56.922 "num_base_bdevs_operational": 2, 00:17:56.922 "base_bdevs_list": [ 00:17:56.922 { 00:17:56.922 "name": "spare", 00:17:56.922 "uuid": "e6db5f62-ef79-5f7c-8219-d2b1d10b42fc", 00:17:56.922 "is_configured": true, 00:17:56.922 "data_offset": 256, 00:17:56.922 "data_size": 7936 00:17:56.922 }, 00:17:56.922 { 00:17:56.922 "name": "BaseBdev2", 00:17:56.922 "uuid": "a2834698-ba7c-58fc-b7d3-27c18e839e3e", 00:17:56.922 "is_configured": true, 00:17:56.922 "data_offset": 256, 00:17:56.922 "data_size": 7936 00:17:56.922 } 00:17:56.922 ] 00:17:56.922 }' 00:17:56.922 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.922 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.181 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:57.181 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.181 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.181 [2024-11-20 15:25:43.635034] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:57.181 [2024-11-20 15:25:43.635265] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:57.181 [2024-11-20 15:25:43.635381] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:57.181 [2024-11-20 15:25:43.635457] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:57.181 [2024-11-20 15:25:43.635470] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:17:57.181 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.181 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:17:57.181 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.181 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.181 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.181 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.439 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:57.439 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:57.439 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:57.439 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:57.439 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:57.439 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:57.439 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:57.439 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:57.439 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:57.439 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:57.439 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:57.439 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:57.439 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:57.439 /dev/nbd0 00:17:57.698 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:57.698 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:57.698 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:57.698 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:57.698 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:57.698 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:57.698 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:57.698 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:57.698 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:57.698 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:57.698 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:57.698 1+0 records in 00:17:57.698 1+0 records out 00:17:57.698 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257085 s, 15.9 MB/s 00:17:57.698 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:57.698 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:57.698 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:57.698 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:57.698 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:57.698 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:57.698 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:57.698 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:57.956 /dev/nbd1 00:17:57.956 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:57.956 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:57.956 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:57.956 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:57.956 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:57.956 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:57.956 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:57.956 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:57.956 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:57.956 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:57.956 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:57.956 1+0 records in 00:17:57.956 1+0 records out 00:17:57.956 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325203 s, 12.6 MB/s 00:17:57.956 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:57.956 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:57.956 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:57.956 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:57.956 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:57.956 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:57.956 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:57.956 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:57.956 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:57.956 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:57.956 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:57.956 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:17:57.956 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:57.956 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:57.956 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:58.214 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:58.214 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:58.214 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:58.214 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:58.214 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:58.214 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:58.214 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:58.214 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:58.214 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:58.214 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:58.472 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:58.472 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:58.472 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:17:58.472 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:58.472 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:58.472 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:58.472 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:58.472 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:58.472 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:58.472 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:58.472 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.472 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.472 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.472 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:58.472 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.472 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.731 [2024-11-20 15:25:44.956610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:58.731 [2024-11-20 15:25:44.956697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.731 [2024-11-20 15:25:44.956728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:58.731 [2024-11-20 15:25:44.956740] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:17:58.731 [2024-11-20 15:25:44.959150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.731 [2024-11-20 15:25:44.959196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:58.731 [2024-11-20 15:25:44.959271] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:58.731 [2024-11-20 15:25:44.959332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:58.731 [2024-11-20 15:25:44.959482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:58.731 spare 00:17:58.731 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.731 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:58.731 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.731 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.731 [2024-11-20 15:25:45.059413] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:58.731 [2024-11-20 15:25:45.059476] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:58.731 [2024-11-20 15:25:45.059610] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:58.731 [2024-11-20 15:25:45.059802] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:58.731 [2024-11-20 15:25:45.059821] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:58.731 [2024-11-20 15:25:45.059971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.731 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:58.731 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:58.731 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.731 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.731 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.731 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.731 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:58.731 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.731 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.731 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.731 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.731 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.731 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.731 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.731 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.731 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.731 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.731 "name": "raid_bdev1", 00:17:58.731 "uuid": 
"5f15695a-aefd-4719-b42b-bb7cf0975a34", 00:17:58.731 "strip_size_kb": 0, 00:17:58.731 "state": "online", 00:17:58.731 "raid_level": "raid1", 00:17:58.731 "superblock": true, 00:17:58.731 "num_base_bdevs": 2, 00:17:58.731 "num_base_bdevs_discovered": 2, 00:17:58.731 "num_base_bdevs_operational": 2, 00:17:58.731 "base_bdevs_list": [ 00:17:58.731 { 00:17:58.731 "name": "spare", 00:17:58.731 "uuid": "e6db5f62-ef79-5f7c-8219-d2b1d10b42fc", 00:17:58.731 "is_configured": true, 00:17:58.731 "data_offset": 256, 00:17:58.731 "data_size": 7936 00:17:58.731 }, 00:17:58.731 { 00:17:58.731 "name": "BaseBdev2", 00:17:58.731 "uuid": "a2834698-ba7c-58fc-b7d3-27c18e839e3e", 00:17:58.731 "is_configured": true, 00:17:58.731 "data_offset": 256, 00:17:58.731 "data_size": 7936 00:17:58.731 } 00:17:58.731 ] 00:17:58.731 }' 00:17:58.731 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.731 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.299 "name": "raid_bdev1", 00:17:59.299 "uuid": "5f15695a-aefd-4719-b42b-bb7cf0975a34", 00:17:59.299 "strip_size_kb": 0, 00:17:59.299 "state": "online", 00:17:59.299 "raid_level": "raid1", 00:17:59.299 "superblock": true, 00:17:59.299 "num_base_bdevs": 2, 00:17:59.299 "num_base_bdevs_discovered": 2, 00:17:59.299 "num_base_bdevs_operational": 2, 00:17:59.299 "base_bdevs_list": [ 00:17:59.299 { 00:17:59.299 "name": "spare", 00:17:59.299 "uuid": "e6db5f62-ef79-5f7c-8219-d2b1d10b42fc", 00:17:59.299 "is_configured": true, 00:17:59.299 "data_offset": 256, 00:17:59.299 "data_size": 7936 00:17:59.299 }, 00:17:59.299 { 00:17:59.299 "name": "BaseBdev2", 00:17:59.299 "uuid": "a2834698-ba7c-58fc-b7d3-27c18e839e3e", 00:17:59.299 "is_configured": true, 00:17:59.299 "data_offset": 256, 00:17:59.299 "data_size": 7936 00:17:59.299 } 00:17:59.299 ] 00:17:59.299 }' 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r 
'.[].base_bdevs_list[0].name' 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.299 [2024-11-20 15:25:45.703675] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.299 15:25:45 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.299 "name": "raid_bdev1", 00:17:59.299 "uuid": "5f15695a-aefd-4719-b42b-bb7cf0975a34", 00:17:59.299 "strip_size_kb": 0, 00:17:59.299 "state": "online", 00:17:59.299 "raid_level": "raid1", 00:17:59.299 "superblock": true, 00:17:59.299 "num_base_bdevs": 2, 00:17:59.299 "num_base_bdevs_discovered": 1, 00:17:59.299 "num_base_bdevs_operational": 1, 00:17:59.299 "base_bdevs_list": [ 00:17:59.299 { 00:17:59.299 "name": null, 00:17:59.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.299 "is_configured": false, 00:17:59.299 "data_offset": 0, 00:17:59.299 "data_size": 7936 00:17:59.299 }, 00:17:59.299 { 00:17:59.299 "name": "BaseBdev2", 00:17:59.299 "uuid": "a2834698-ba7c-58fc-b7d3-27c18e839e3e", 00:17:59.299 "is_configured": true, 00:17:59.299 "data_offset": 256, 00:17:59.299 "data_size": 7936 00:17:59.299 } 00:17:59.299 ] 00:17:59.299 }' 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.299 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.866 15:25:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:59.866 15:25:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.866 15:25:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.866 [2024-11-20 15:25:46.163036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:59.866 [2024-11-20 15:25:46.163409] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:59.866 [2024-11-20 15:25:46.163438] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:59.866 [2024-11-20 15:25:46.163489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:59.866 [2024-11-20 15:25:46.177832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:59.866 15:25:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.866 15:25:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:59.866 [2024-11-20 15:25:46.180040] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:00.801 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:00.801 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.801 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:00.801 15:25:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:00.801 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.801 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.801 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.801 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.801 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.801 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.801 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.801 "name": "raid_bdev1", 00:18:00.801 "uuid": "5f15695a-aefd-4719-b42b-bb7cf0975a34", 00:18:00.801 "strip_size_kb": 0, 00:18:00.801 "state": "online", 00:18:00.801 "raid_level": "raid1", 00:18:00.801 "superblock": true, 00:18:00.801 "num_base_bdevs": 2, 00:18:00.801 "num_base_bdevs_discovered": 2, 00:18:00.801 "num_base_bdevs_operational": 2, 00:18:00.801 "process": { 00:18:00.801 "type": "rebuild", 00:18:00.801 "target": "spare", 00:18:00.801 "progress": { 00:18:00.801 "blocks": 2560, 00:18:00.801 "percent": 32 00:18:00.801 } 00:18:00.801 }, 00:18:00.801 "base_bdevs_list": [ 00:18:00.801 { 00:18:00.801 "name": "spare", 00:18:00.801 "uuid": "e6db5f62-ef79-5f7c-8219-d2b1d10b42fc", 00:18:00.801 "is_configured": true, 00:18:00.801 "data_offset": 256, 00:18:00.801 "data_size": 7936 00:18:00.801 }, 00:18:00.801 { 00:18:00.801 "name": "BaseBdev2", 00:18:00.801 "uuid": "a2834698-ba7c-58fc-b7d3-27c18e839e3e", 00:18:00.801 "is_configured": true, 00:18:00.801 "data_offset": 256, 00:18:00.801 "data_size": 7936 00:18:00.801 } 00:18:00.801 ] 00:18:00.801 
}' 00:18:00.801 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.801 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:00.801 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.060 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:01.060 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:01.060 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.060 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.060 [2024-11-20 15:25:47.319783] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:01.060 [2024-11-20 15:25:47.385808] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:01.060 [2024-11-20 15:25:47.385911] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.060 [2024-11-20 15:25:47.385929] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:01.060 [2024-11-20 15:25:47.385953] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:01.060 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.060 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:01.060 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.060 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:01.060 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.060 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.060 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:01.060 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.060 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.060 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.060 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.060 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.060 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.060 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.060 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.060 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.060 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.060 "name": "raid_bdev1", 00:18:01.060 "uuid": "5f15695a-aefd-4719-b42b-bb7cf0975a34", 00:18:01.060 "strip_size_kb": 0, 00:18:01.060 "state": "online", 00:18:01.060 "raid_level": "raid1", 00:18:01.060 "superblock": true, 00:18:01.060 "num_base_bdevs": 2, 00:18:01.060 "num_base_bdevs_discovered": 1, 00:18:01.060 "num_base_bdevs_operational": 1, 00:18:01.060 "base_bdevs_list": [ 00:18:01.060 { 00:18:01.060 "name": 
null, 00:18:01.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.060 "is_configured": false, 00:18:01.060 "data_offset": 0, 00:18:01.060 "data_size": 7936 00:18:01.060 }, 00:18:01.060 { 00:18:01.060 "name": "BaseBdev2", 00:18:01.060 "uuid": "a2834698-ba7c-58fc-b7d3-27c18e839e3e", 00:18:01.060 "is_configured": true, 00:18:01.060 "data_offset": 256, 00:18:01.060 "data_size": 7936 00:18:01.060 } 00:18:01.060 ] 00:18:01.060 }' 00:18:01.060 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.060 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.669 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:01.670 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.670 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.670 [2024-11-20 15:25:47.829867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:01.670 [2024-11-20 15:25:47.829952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.670 [2024-11-20 15:25:47.829981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:01.670 [2024-11-20 15:25:47.829996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.670 [2024-11-20 15:25:47.830281] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.670 [2024-11-20 15:25:47.830302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:01.670 [2024-11-20 15:25:47.830369] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:01.670 [2024-11-20 15:25:47.830386] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:01.670 [2024-11-20 15:25:47.830399] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:01.670 [2024-11-20 15:25:47.830424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:01.670 [2024-11-20 15:25:47.845474] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:01.670 spare 00:18:01.670 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.670 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:01.670 [2024-11-20 15:25:47.847739] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:02.605 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:02.606 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.606 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:02.606 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:02.606 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.606 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.606 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.606 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.606 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.606 15:25:48 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.606 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.606 "name": "raid_bdev1", 00:18:02.606 "uuid": "5f15695a-aefd-4719-b42b-bb7cf0975a34", 00:18:02.606 "strip_size_kb": 0, 00:18:02.606 "state": "online", 00:18:02.606 "raid_level": "raid1", 00:18:02.606 "superblock": true, 00:18:02.606 "num_base_bdevs": 2, 00:18:02.606 "num_base_bdevs_discovered": 2, 00:18:02.606 "num_base_bdevs_operational": 2, 00:18:02.606 "process": { 00:18:02.606 "type": "rebuild", 00:18:02.606 "target": "spare", 00:18:02.606 "progress": { 00:18:02.606 "blocks": 2560, 00:18:02.606 "percent": 32 00:18:02.606 } 00:18:02.606 }, 00:18:02.606 "base_bdevs_list": [ 00:18:02.606 { 00:18:02.606 "name": "spare", 00:18:02.606 "uuid": "e6db5f62-ef79-5f7c-8219-d2b1d10b42fc", 00:18:02.606 "is_configured": true, 00:18:02.606 "data_offset": 256, 00:18:02.606 "data_size": 7936 00:18:02.606 }, 00:18:02.606 { 00:18:02.606 "name": "BaseBdev2", 00:18:02.606 "uuid": "a2834698-ba7c-58fc-b7d3-27c18e839e3e", 00:18:02.606 "is_configured": true, 00:18:02.606 "data_offset": 256, 00:18:02.606 "data_size": 7936 00:18:02.606 } 00:18:02.606 ] 00:18:02.606 }' 00:18:02.606 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.606 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:02.606 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.606 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:02.606 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:02.606 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.606 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.606 [2024-11-20 15:25:49.007730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:02.606 [2024-11-20 15:25:49.053831] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:02.606 [2024-11-20 15:25:49.053920] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.606 [2024-11-20 15:25:49.053941] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:02.606 [2024-11-20 15:25:49.053950] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:02.606 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.606 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:02.606 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.606 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.606 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.606 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.606 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:02.606 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.606 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.606 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:02.606 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.606 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.606 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.606 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.606 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.862 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.862 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.862 "name": "raid_bdev1", 00:18:02.862 "uuid": "5f15695a-aefd-4719-b42b-bb7cf0975a34", 00:18:02.862 "strip_size_kb": 0, 00:18:02.862 "state": "online", 00:18:02.862 "raid_level": "raid1", 00:18:02.862 "superblock": true, 00:18:02.862 "num_base_bdevs": 2, 00:18:02.863 "num_base_bdevs_discovered": 1, 00:18:02.863 "num_base_bdevs_operational": 1, 00:18:02.863 "base_bdevs_list": [ 00:18:02.863 { 00:18:02.863 "name": null, 00:18:02.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.863 "is_configured": false, 00:18:02.863 "data_offset": 0, 00:18:02.863 "data_size": 7936 00:18:02.863 }, 00:18:02.863 { 00:18:02.863 "name": "BaseBdev2", 00:18:02.863 "uuid": "a2834698-ba7c-58fc-b7d3-27c18e839e3e", 00:18:02.863 "is_configured": true, 00:18:02.863 "data_offset": 256, 00:18:02.863 "data_size": 7936 00:18:02.863 } 00:18:02.863 ] 00:18:02.863 }' 00:18:02.863 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.863 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.120 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:03.120 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:03.120 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:03.120 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:03.120 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.120 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.120 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.120 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.120 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.120 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.120 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.120 "name": "raid_bdev1", 00:18:03.120 "uuid": "5f15695a-aefd-4719-b42b-bb7cf0975a34", 00:18:03.120 "strip_size_kb": 0, 00:18:03.120 "state": "online", 00:18:03.120 "raid_level": "raid1", 00:18:03.120 "superblock": true, 00:18:03.120 "num_base_bdevs": 2, 00:18:03.120 "num_base_bdevs_discovered": 1, 00:18:03.120 "num_base_bdevs_operational": 1, 00:18:03.120 "base_bdevs_list": [ 00:18:03.120 { 00:18:03.120 "name": null, 00:18:03.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.120 "is_configured": false, 00:18:03.120 "data_offset": 0, 00:18:03.120 "data_size": 7936 00:18:03.120 }, 00:18:03.120 { 00:18:03.120 "name": "BaseBdev2", 00:18:03.120 "uuid": "a2834698-ba7c-58fc-b7d3-27c18e839e3e", 
00:18:03.120 "is_configured": true, 00:18:03.120 "data_offset": 256, 00:18:03.120 "data_size": 7936 00:18:03.120 } 00:18:03.120 ] 00:18:03.120 }' 00:18:03.120 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.120 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:03.120 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.377 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:03.377 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:03.377 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.377 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.377 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.377 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:03.377 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.377 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.377 [2024-11-20 15:25:49.646851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:03.377 [2024-11-20 15:25:49.647045] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.377 [2024-11-20 15:25:49.647086] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:03.377 [2024-11-20 15:25:49.647098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:03.377 [2024-11-20 15:25:49.647325] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.377 [2024-11-20 15:25:49.647340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:03.377 [2024-11-20 15:25:49.647405] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:03.377 [2024-11-20 15:25:49.647420] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:03.377 [2024-11-20 15:25:49.647433] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:03.377 [2024-11-20 15:25:49.647445] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:03.377 BaseBdev1 00:18:03.377 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.377 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:04.311 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:04.311 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.311 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.311 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:04.311 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.311 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:04.311 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.311 15:25:50 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.311 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.311 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.311 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.311 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.311 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.311 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.311 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.311 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.311 "name": "raid_bdev1", 00:18:04.311 "uuid": "5f15695a-aefd-4719-b42b-bb7cf0975a34", 00:18:04.311 "strip_size_kb": 0, 00:18:04.311 "state": "online", 00:18:04.311 "raid_level": "raid1", 00:18:04.311 "superblock": true, 00:18:04.311 "num_base_bdevs": 2, 00:18:04.311 "num_base_bdevs_discovered": 1, 00:18:04.311 "num_base_bdevs_operational": 1, 00:18:04.311 "base_bdevs_list": [ 00:18:04.311 { 00:18:04.311 "name": null, 00:18:04.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.311 "is_configured": false, 00:18:04.311 "data_offset": 0, 00:18:04.311 "data_size": 7936 00:18:04.311 }, 00:18:04.311 { 00:18:04.311 "name": "BaseBdev2", 00:18:04.311 "uuid": "a2834698-ba7c-58fc-b7d3-27c18e839e3e", 00:18:04.311 "is_configured": true, 00:18:04.311 "data_offset": 256, 00:18:04.311 "data_size": 7936 00:18:04.311 } 00:18:04.311 ] 00:18:04.311 }' 00:18:04.311 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.311 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.878 15:25:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:04.878 15:25:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.878 15:25:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:04.878 15:25:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:04.878 15:25:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.878 15:25:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.878 15:25:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.878 15:25:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.878 15:25:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.878 15:25:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.878 15:25:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.878 "name": "raid_bdev1", 00:18:04.878 "uuid": "5f15695a-aefd-4719-b42b-bb7cf0975a34", 00:18:04.878 "strip_size_kb": 0, 00:18:04.878 "state": "online", 00:18:04.878 "raid_level": "raid1", 00:18:04.878 "superblock": true, 00:18:04.878 "num_base_bdevs": 2, 00:18:04.878 "num_base_bdevs_discovered": 1, 00:18:04.878 "num_base_bdevs_operational": 1, 00:18:04.878 "base_bdevs_list": [ 00:18:04.878 { 00:18:04.878 "name": null, 00:18:04.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.878 
"is_configured": false, 00:18:04.878 "data_offset": 0, 00:18:04.878 "data_size": 7936 00:18:04.878 }, 00:18:04.878 { 00:18:04.878 "name": "BaseBdev2", 00:18:04.878 "uuid": "a2834698-ba7c-58fc-b7d3-27c18e839e3e", 00:18:04.878 "is_configured": true, 00:18:04.878 "data_offset": 256, 00:18:04.878 "data_size": 7936 00:18:04.878 } 00:18:04.878 ] 00:18:04.878 }' 00:18:04.878 15:25:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.878 15:25:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:04.878 15:25:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:04.878 15:25:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:04.878 15:25:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:04.878 15:25:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:04.878 15:25:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:04.878 15:25:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:04.878 15:25:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:04.878 15:25:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:04.878 15:25:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:04.878 15:25:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:04.878 15:25:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.878 15:25:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.878 [2024-11-20 15:25:51.214904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:04.878 [2024-11-20 15:25:51.215202] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:04.878 [2024-11-20 15:25:51.215337] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:04.878 request: 00:18:04.878 { 00:18:04.878 "base_bdev": "BaseBdev1", 00:18:04.878 "raid_bdev": "raid_bdev1", 00:18:04.878 "method": "bdev_raid_add_base_bdev", 00:18:04.878 "req_id": 1 00:18:04.878 } 00:18:04.879 Got JSON-RPC error response 00:18:04.879 response: 00:18:04.879 { 00:18:04.879 "code": -22, 00:18:04.879 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:04.879 } 00:18:04.879 15:25:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:04.879 15:25:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:04.879 15:25:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:04.879 15:25:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:04.879 15:25:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:04.879 15:25:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:05.816 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:05.816 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:18:05.816 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.816 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.816 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.816 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:05.816 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.816 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.816 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.816 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.816 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.816 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.816 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.816 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.816 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.816 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.816 "name": "raid_bdev1", 00:18:05.816 "uuid": "5f15695a-aefd-4719-b42b-bb7cf0975a34", 00:18:05.816 "strip_size_kb": 0, 00:18:05.816 "state": "online", 00:18:05.816 "raid_level": "raid1", 00:18:05.816 "superblock": true, 00:18:05.816 "num_base_bdevs": 2, 00:18:05.816 
"num_base_bdevs_discovered": 1, 00:18:05.816 "num_base_bdevs_operational": 1, 00:18:05.816 "base_bdevs_list": [ 00:18:05.816 { 00:18:05.816 "name": null, 00:18:05.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.816 "is_configured": false, 00:18:05.816 "data_offset": 0, 00:18:05.816 "data_size": 7936 00:18:05.816 }, 00:18:05.816 { 00:18:05.816 "name": "BaseBdev2", 00:18:05.816 "uuid": "a2834698-ba7c-58fc-b7d3-27c18e839e3e", 00:18:05.816 "is_configured": true, 00:18:05.816 "data_offset": 256, 00:18:05.816 "data_size": 7936 00:18:05.816 } 00:18:05.816 ] 00:18:05.816 }' 00:18:05.816 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.816 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.384 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:06.384 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.385 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:06.385 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:06.385 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.385 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.385 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.385 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.385 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.385 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.385 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.385 "name": "raid_bdev1", 00:18:06.385 "uuid": "5f15695a-aefd-4719-b42b-bb7cf0975a34", 00:18:06.385 "strip_size_kb": 0, 00:18:06.385 "state": "online", 00:18:06.385 "raid_level": "raid1", 00:18:06.385 "superblock": true, 00:18:06.385 "num_base_bdevs": 2, 00:18:06.385 "num_base_bdevs_discovered": 1, 00:18:06.385 "num_base_bdevs_operational": 1, 00:18:06.385 "base_bdevs_list": [ 00:18:06.385 { 00:18:06.385 "name": null, 00:18:06.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.385 "is_configured": false, 00:18:06.385 "data_offset": 0, 00:18:06.385 "data_size": 7936 00:18:06.385 }, 00:18:06.385 { 00:18:06.385 "name": "BaseBdev2", 00:18:06.385 "uuid": "a2834698-ba7c-58fc-b7d3-27c18e839e3e", 00:18:06.385 "is_configured": true, 00:18:06.385 "data_offset": 256, 00:18:06.385 "data_size": 7936 00:18:06.385 } 00:18:06.385 ] 00:18:06.385 }' 00:18:06.385 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.385 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:06.385 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.385 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:06.385 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87561 00:18:06.385 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87561 ']' 00:18:06.385 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87561 00:18:06.385 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:06.385 15:25:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:06.385 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87561 00:18:06.385 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:06.385 killing process with pid 87561 00:18:06.385 Received shutdown signal, test time was about 60.000000 seconds 00:18:06.385 00:18:06.385 Latency(us) 00:18:06.385 [2024-11-20T15:25:52.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.385 [2024-11-20T15:25:52.867Z] =================================================================================================================== 00:18:06.385 [2024-11-20T15:25:52.867Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:06.385 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:06.385 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87561' 00:18:06.385 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87561 00:18:06.385 [2024-11-20 15:25:52.804831] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:06.385 15:25:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87561 00:18:06.385 [2024-11-20 15:25:52.804964] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:06.385 [2024-11-20 15:25:52.805012] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:06.385 [2024-11-20 15:25:52.805026] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:06.954 [2024-11-20 15:25:53.145507] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:18:07.892 15:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:18:07.892 00:18:07.892 real 0m19.811s 00:18:07.892 user 0m25.612s 00:18:07.892 sys 0m2.877s 00:18:07.892 15:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:07.892 ************************************ 00:18:07.892 END TEST raid_rebuild_test_sb_md_separate 00:18:07.892 ************************************ 00:18:07.892 15:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.892 15:25:54 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:18:07.892 15:25:54 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:18:07.893 15:25:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:07.893 15:25:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:07.893 15:25:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:07.893 ************************************ 00:18:07.893 START TEST raid_state_function_test_sb_md_interleaved 00:18:07.893 ************************************ 00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:07.893 Process raid pid: 88255 00:18:07.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88255 00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88255' 00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88255 00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88255 ']' 00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
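For readers decoding the trace: the repeated `verify_raid_bdev_state` steps in this log fetch `rpc_cmd bdev_raid_get_bdevs all` and pipe it through `jq -r '.[] | select(.name == ...)'` to pick one raid bdev, then compare its fields against the expected state. A rough Python equivalent is sketched below; the JSON is trimmed from the `raid_bdev_info` output captured earlier in this trace, and the function name merely mirrors the shell helper for illustration (it is not part of SPDK):

```python
import json

# Shape of one entry as printed by `rpc.py bdev_raid_get_bdevs all` in the
# trace above; field names and values are trimmed from the captured output.
raid_bdevs = json.loads("""
[
  {
    "name": "raid_bdev1",
    "state": "online",
    "raid_level": "raid1",
    "superblock": true,
    "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 1,
    "base_bdevs_list": [
      {"name": null, "is_configured": false, "data_offset": 0, "data_size": 7936},
      {"name": "BaseBdev2", "is_configured": true, "data_offset": 256, "data_size": 7936}
    ]
  }
]
""")

def verify_raid_bdev_state(bdevs, name, expected_state, raid_level, operational):
    """Illustrative analogue of the shell helper: select the bdev by name
    (what the jq filter does) and compare the fields the test asserts on."""
    info = next(b for b in bdevs if b["name"] == name)
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["num_base_bdevs_operational"] == operational)

print(verify_raid_bdev_state(raid_bdevs, "raid_bdev1", "online", "raid1", 1))  # True
```

This matches the check visible around timestamp 15:25:52 above, where the raid1 bdev stays `online` with one operational base bdev after a rebuild.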
00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.893 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:08.152 [2024-11-20 15:25:54.435985] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:18:08.152 [2024-11-20 15:25:54.436306] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:08.152 [2024-11-20 15:25:54.619218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.410 [2024-11-20 15:25:54.735832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.670 [2024-11-20 15:25:54.937227] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:08.670 [2024-11-20 15:25:54.937491] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:08.928 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.928 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:08.928 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:08.928 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.928 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.928 [2024-11-20 
15:25:55.283716] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:08.928 [2024-11-20 15:25:55.283905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:08.928 [2024-11-20 15:25:55.284009] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:08.928 [2024-11-20 15:25:55.284054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:08.928 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.928 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:08.928 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:08.928 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:08.928 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.928 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.928 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:08.928 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.928 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.928 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.928 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.928 15:25:55 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.928 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.928 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.928 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.928 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.928 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.928 "name": "Existed_Raid", 00:18:08.928 "uuid": "94101c72-b2af-49e0-b852-2ef63ec8440b", 00:18:08.928 "strip_size_kb": 0, 00:18:08.928 "state": "configuring", 00:18:08.928 "raid_level": "raid1", 00:18:08.928 "superblock": true, 00:18:08.928 "num_base_bdevs": 2, 00:18:08.928 "num_base_bdevs_discovered": 0, 00:18:08.928 "num_base_bdevs_operational": 2, 00:18:08.928 "base_bdevs_list": [ 00:18:08.928 { 00:18:08.928 "name": "BaseBdev1", 00:18:08.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.928 "is_configured": false, 00:18:08.928 "data_offset": 0, 00:18:08.928 "data_size": 0 00:18:08.928 }, 00:18:08.928 { 00:18:08.928 "name": "BaseBdev2", 00:18:08.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.928 "is_configured": false, 00:18:08.928 "data_offset": 0, 00:18:08.928 "data_size": 0 00:18:08.928 } 00:18:08.928 ] 00:18:08.928 }' 00:18:08.928 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.928 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.498 [2024-11-20 15:25:55.707047] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:09.498 [2024-11-20 15:25:55.707087] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.498 [2024-11-20 15:25:55.719042] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:09.498 [2024-11-20 15:25:55.719095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:09.498 [2024-11-20 15:25:55.719105] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:09.498 [2024-11-20 15:25:55.719121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:18:09.498 15:25:55 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.498 [2024-11-20 15:25:55.771054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:09.498 BaseBdev1 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:09.498 15:25:55 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.498 [ 00:18:09.498 { 00:18:09.498 "name": "BaseBdev1", 00:18:09.498 "aliases": [ 00:18:09.498 "7b985f32-ecf2-4131-b670-178b460cacdd" 00:18:09.498 ], 00:18:09.498 "product_name": "Malloc disk", 00:18:09.498 "block_size": 4128, 00:18:09.498 "num_blocks": 8192, 00:18:09.498 "uuid": "7b985f32-ecf2-4131-b670-178b460cacdd", 00:18:09.498 "md_size": 32, 00:18:09.498 "md_interleave": true, 00:18:09.498 "dif_type": 0, 00:18:09.498 "assigned_rate_limits": { 00:18:09.498 "rw_ios_per_sec": 0, 00:18:09.498 "rw_mbytes_per_sec": 0, 00:18:09.498 "r_mbytes_per_sec": 0, 00:18:09.498 "w_mbytes_per_sec": 0 00:18:09.498 }, 00:18:09.498 "claimed": true, 00:18:09.498 "claim_type": "exclusive_write", 00:18:09.498 "zoned": false, 00:18:09.498 "supported_io_types": { 00:18:09.498 "read": true, 00:18:09.498 "write": true, 00:18:09.498 "unmap": true, 00:18:09.498 "flush": true, 00:18:09.498 "reset": true, 00:18:09.498 "nvme_admin": false, 00:18:09.498 "nvme_io": false, 00:18:09.498 "nvme_io_md": false, 00:18:09.498 "write_zeroes": true, 00:18:09.498 "zcopy": true, 00:18:09.498 "get_zone_info": false, 00:18:09.498 "zone_management": false, 00:18:09.498 "zone_append": false, 00:18:09.498 "compare": false, 00:18:09.498 "compare_and_write": false, 00:18:09.498 "abort": true, 00:18:09.498 "seek_hole": false, 00:18:09.498 "seek_data": false, 00:18:09.498 "copy": true, 00:18:09.498 "nvme_iov_md": false 00:18:09.498 }, 00:18:09.498 "memory_domains": [ 00:18:09.498 { 00:18:09.498 "dma_device_id": "system", 00:18:09.498 "dma_device_type": 1 00:18:09.498 }, 00:18:09.498 { 00:18:09.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.498 "dma_device_type": 2 00:18:09.498 } 00:18:09.498 ], 00:18:09.498 "driver_specific": {} 00:18:09.498 } 00:18:09.498 ] 00:18:09.498 15:25:55 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.498 15:25:55 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.498 "name": "Existed_Raid", 00:18:09.498 "uuid": "1554e3e3-10bc-48a8-9445-591a75a9c661", 00:18:09.498 "strip_size_kb": 0, 00:18:09.498 "state": "configuring", 00:18:09.498 "raid_level": "raid1", 00:18:09.498 "superblock": true, 00:18:09.498 "num_base_bdevs": 2, 00:18:09.498 "num_base_bdevs_discovered": 1, 00:18:09.498 "num_base_bdevs_operational": 2, 00:18:09.498 "base_bdevs_list": [ 00:18:09.498 { 00:18:09.498 "name": "BaseBdev1", 00:18:09.498 "uuid": "7b985f32-ecf2-4131-b670-178b460cacdd", 00:18:09.498 "is_configured": true, 00:18:09.498 "data_offset": 256, 00:18:09.498 "data_size": 7936 00:18:09.498 }, 00:18:09.498 { 00:18:09.498 "name": "BaseBdev2", 00:18:09.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.498 "is_configured": false, 00:18:09.498 "data_offset": 0, 00:18:09.498 "data_size": 0 00:18:09.498 } 00:18:09.498 ] 00:18:09.498 }' 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.498 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.066 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:10.066 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.066 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.066 [2024-11-20 15:25:56.254871] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:10.066 [2024-11-20 
15:25:56.255087] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:10.066 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.066 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:10.066 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.066 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.066 [2024-11-20 15:25:56.262942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:10.066 [2024-11-20 15:25:56.265056] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:10.066 [2024-11-20 15:25:56.265236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:10.066 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.066 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:10.066 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:10.066 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:10.066 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:10.066 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:10.066 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- 
# local raid_level=raid1 00:18:10.066 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.066 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:10.066 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.066 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.066 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.066 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.066 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.066 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.066 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.066 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.066 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.066 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.066 "name": "Existed_Raid", 00:18:10.066 "uuid": "9088f3f5-488c-4ee2-a8f8-d358acb5d2ba", 00:18:10.066 "strip_size_kb": 0, 00:18:10.066 "state": "configuring", 00:18:10.066 "raid_level": "raid1", 00:18:10.066 "superblock": true, 00:18:10.066 "num_base_bdevs": 2, 00:18:10.066 "num_base_bdevs_discovered": 1, 00:18:10.066 "num_base_bdevs_operational": 2, 00:18:10.066 "base_bdevs_list": [ 00:18:10.066 { 
00:18:10.066 "name": "BaseBdev1", 00:18:10.066 "uuid": "7b985f32-ecf2-4131-b670-178b460cacdd", 00:18:10.066 "is_configured": true, 00:18:10.066 "data_offset": 256, 00:18:10.066 "data_size": 7936 00:18:10.066 }, 00:18:10.066 { 00:18:10.066 "name": "BaseBdev2", 00:18:10.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.066 "is_configured": false, 00:18:10.066 "data_offset": 0, 00:18:10.066 "data_size": 0 00:18:10.066 } 00:18:10.066 ] 00:18:10.066 }' 00:18:10.066 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.066 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.326 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:18:10.326 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.326 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.326 [2024-11-20 15:25:56.732691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:10.326 [2024-11-20 15:25:56.733154] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:10.326 [2024-11-20 15:25:56.733177] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:10.326 [2024-11-20 15:25:56.733266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:10.326 [2024-11-20 15:25:56.733343] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:10.326 [2024-11-20 15:25:56.733357] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:10.326 [2024-11-20 15:25:56.733420] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:18:10.326 BaseBdev2 00:18:10.326 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.326 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:10.326 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:10.326 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:10.326 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:10.326 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:10.326 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:10.326 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:10.326 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.326 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.326 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.326 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:10.326 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.326 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.326 [ 00:18:10.326 { 00:18:10.326 "name": "BaseBdev2", 00:18:10.326 "aliases": [ 00:18:10.326 
"4fcfb9f2-e8c4-4846-b7ef-a0f32b3b57b6" 00:18:10.326 ], 00:18:10.326 "product_name": "Malloc disk", 00:18:10.326 "block_size": 4128, 00:18:10.326 "num_blocks": 8192, 00:18:10.326 "uuid": "4fcfb9f2-e8c4-4846-b7ef-a0f32b3b57b6", 00:18:10.326 "md_size": 32, 00:18:10.326 "md_interleave": true, 00:18:10.326 "dif_type": 0, 00:18:10.326 "assigned_rate_limits": { 00:18:10.326 "rw_ios_per_sec": 0, 00:18:10.326 "rw_mbytes_per_sec": 0, 00:18:10.326 "r_mbytes_per_sec": 0, 00:18:10.326 "w_mbytes_per_sec": 0 00:18:10.326 }, 00:18:10.326 "claimed": true, 00:18:10.326 "claim_type": "exclusive_write", 00:18:10.326 "zoned": false, 00:18:10.326 "supported_io_types": { 00:18:10.326 "read": true, 00:18:10.326 "write": true, 00:18:10.326 "unmap": true, 00:18:10.326 "flush": true, 00:18:10.326 "reset": true, 00:18:10.326 "nvme_admin": false, 00:18:10.326 "nvme_io": false, 00:18:10.326 "nvme_io_md": false, 00:18:10.326 "write_zeroes": true, 00:18:10.326 "zcopy": true, 00:18:10.326 "get_zone_info": false, 00:18:10.326 "zone_management": false, 00:18:10.326 "zone_append": false, 00:18:10.326 "compare": false, 00:18:10.326 "compare_and_write": false, 00:18:10.326 "abort": true, 00:18:10.326 "seek_hole": false, 00:18:10.326 "seek_data": false, 00:18:10.326 "copy": true, 00:18:10.326 "nvme_iov_md": false 00:18:10.326 }, 00:18:10.326 "memory_domains": [ 00:18:10.326 { 00:18:10.326 "dma_device_id": "system", 00:18:10.326 "dma_device_type": 1 00:18:10.326 }, 00:18:10.326 { 00:18:10.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.326 "dma_device_type": 2 00:18:10.326 } 00:18:10.326 ], 00:18:10.326 "driver_specific": {} 00:18:10.326 } 00:18:10.326 ] 00:18:10.326 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.327 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:10.327 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 
-- # (( i++ )) 00:18:10.327 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:10.327 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:10.327 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:10.327 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.327 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:10.327 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.327 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:10.327 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.327 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.327 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.327 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.327 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.327 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.327 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.327 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:18:10.586 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.586 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.586 "name": "Existed_Raid", 00:18:10.586 "uuid": "9088f3f5-488c-4ee2-a8f8-d358acb5d2ba", 00:18:10.586 "strip_size_kb": 0, 00:18:10.586 "state": "online", 00:18:10.586 "raid_level": "raid1", 00:18:10.586 "superblock": true, 00:18:10.586 "num_base_bdevs": 2, 00:18:10.586 "num_base_bdevs_discovered": 2, 00:18:10.586 "num_base_bdevs_operational": 2, 00:18:10.586 "base_bdevs_list": [ 00:18:10.586 { 00:18:10.586 "name": "BaseBdev1", 00:18:10.586 "uuid": "7b985f32-ecf2-4131-b670-178b460cacdd", 00:18:10.586 "is_configured": true, 00:18:10.586 "data_offset": 256, 00:18:10.586 "data_size": 7936 00:18:10.586 }, 00:18:10.586 { 00:18:10.586 "name": "BaseBdev2", 00:18:10.586 "uuid": "4fcfb9f2-e8c4-4846-b7ef-a0f32b3b57b6", 00:18:10.586 "is_configured": true, 00:18:10.586 "data_offset": 256, 00:18:10.586 "data_size": 7936 00:18:10.586 } 00:18:10.586 ] 00:18:10.586 }' 00:18:10.586 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.586 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.846 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:10.846 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:10.846 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:10.846 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:10.846 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@184 -- # local name 00:18:10.846 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:10.846 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:10.846 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:10.846 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.846 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.846 [2024-11-20 15:25:57.256295] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:10.846 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.846 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:10.846 "name": "Existed_Raid", 00:18:10.846 "aliases": [ 00:18:10.846 "9088f3f5-488c-4ee2-a8f8-d358acb5d2ba" 00:18:10.846 ], 00:18:10.846 "product_name": "Raid Volume", 00:18:10.846 "block_size": 4128, 00:18:10.846 "num_blocks": 7936, 00:18:10.846 "uuid": "9088f3f5-488c-4ee2-a8f8-d358acb5d2ba", 00:18:10.846 "md_size": 32, 00:18:10.846 "md_interleave": true, 00:18:10.846 "dif_type": 0, 00:18:10.846 "assigned_rate_limits": { 00:18:10.846 "rw_ios_per_sec": 0, 00:18:10.846 "rw_mbytes_per_sec": 0, 00:18:10.846 "r_mbytes_per_sec": 0, 00:18:10.846 "w_mbytes_per_sec": 0 00:18:10.846 }, 00:18:10.846 "claimed": false, 00:18:10.846 "zoned": false, 00:18:10.846 "supported_io_types": { 00:18:10.846 "read": true, 00:18:10.846 "write": true, 00:18:10.846 "unmap": false, 00:18:10.846 "flush": false, 00:18:10.846 "reset": true, 00:18:10.846 "nvme_admin": false, 00:18:10.846 "nvme_io": false, 00:18:10.846 "nvme_io_md": false, 00:18:10.846 
"write_zeroes": true, 00:18:10.846 "zcopy": false, 00:18:10.846 "get_zone_info": false, 00:18:10.846 "zone_management": false, 00:18:10.846 "zone_append": false, 00:18:10.846 "compare": false, 00:18:10.846 "compare_and_write": false, 00:18:10.846 "abort": false, 00:18:10.846 "seek_hole": false, 00:18:10.846 "seek_data": false, 00:18:10.846 "copy": false, 00:18:10.846 "nvme_iov_md": false 00:18:10.846 }, 00:18:10.846 "memory_domains": [ 00:18:10.846 { 00:18:10.846 "dma_device_id": "system", 00:18:10.846 "dma_device_type": 1 00:18:10.846 }, 00:18:10.846 { 00:18:10.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.846 "dma_device_type": 2 00:18:10.846 }, 00:18:10.846 { 00:18:10.846 "dma_device_id": "system", 00:18:10.846 "dma_device_type": 1 00:18:10.846 }, 00:18:10.846 { 00:18:10.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.846 "dma_device_type": 2 00:18:10.846 } 00:18:10.846 ], 00:18:10.846 "driver_specific": { 00:18:10.846 "raid": { 00:18:10.846 "uuid": "9088f3f5-488c-4ee2-a8f8-d358acb5d2ba", 00:18:10.846 "strip_size_kb": 0, 00:18:10.846 "state": "online", 00:18:10.846 "raid_level": "raid1", 00:18:10.846 "superblock": true, 00:18:10.846 "num_base_bdevs": 2, 00:18:10.846 "num_base_bdevs_discovered": 2, 00:18:10.846 "num_base_bdevs_operational": 2, 00:18:10.846 "base_bdevs_list": [ 00:18:10.846 { 00:18:10.846 "name": "BaseBdev1", 00:18:10.846 "uuid": "7b985f32-ecf2-4131-b670-178b460cacdd", 00:18:10.846 "is_configured": true, 00:18:10.846 "data_offset": 256, 00:18:10.846 "data_size": 7936 00:18:10.846 }, 00:18:10.846 { 00:18:10.846 "name": "BaseBdev2", 00:18:10.846 "uuid": "4fcfb9f2-e8c4-4846-b7ef-a0f32b3b57b6", 00:18:10.846 "is_configured": true, 00:18:10.846 "data_offset": 256, 00:18:10.846 "data_size": 7936 00:18:10.846 } 00:18:10.846 ] 00:18:10.846 } 00:18:10.846 } 00:18:10.846 }' 00:18:10.846 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:18:11.113 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:11.113 BaseBdev2' 00:18:11.113 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:11.113 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:11.113 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:11.113 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:11.113 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:11.113 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.113 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.113 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.113 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:11.113 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:11.113 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:11.113 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:11.113 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:11.113 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.113 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.113 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.113 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:11.113 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:11.113 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:11.113 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.113 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.114 [2024-11-20 15:25:57.455805] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:11.114 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.114 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:11.114 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:11.114 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:11.114 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:11.114 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # 
expected_state=online 00:18:11.114 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:11.114 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:11.114 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.114 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:11.114 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:11.114 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:11.114 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.114 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.114 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.114 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.114 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.114 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.114 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.114 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.114 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:18:11.373 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.373 "name": "Existed_Raid", 00:18:11.374 "uuid": "9088f3f5-488c-4ee2-a8f8-d358acb5d2ba", 00:18:11.374 "strip_size_kb": 0, 00:18:11.374 "state": "online", 00:18:11.374 "raid_level": "raid1", 00:18:11.374 "superblock": true, 00:18:11.374 "num_base_bdevs": 2, 00:18:11.374 "num_base_bdevs_discovered": 1, 00:18:11.374 "num_base_bdevs_operational": 1, 00:18:11.374 "base_bdevs_list": [ 00:18:11.374 { 00:18:11.374 "name": null, 00:18:11.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.374 "is_configured": false, 00:18:11.374 "data_offset": 0, 00:18:11.374 "data_size": 7936 00:18:11.374 }, 00:18:11.374 { 00:18:11.374 "name": "BaseBdev2", 00:18:11.374 "uuid": "4fcfb9f2-e8c4-4846-b7ef-a0f32b3b57b6", 00:18:11.374 "is_configured": true, 00:18:11.374 "data_offset": 256, 00:18:11.374 "data_size": 7936 00:18:11.374 } 00:18:11.374 ] 00:18:11.374 }' 00:18:11.374 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.374 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.633 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:11.633 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:11.633 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.633 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.633 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.633 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 
00:18:11.633 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.633 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:11.633 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:11.633 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:11.633 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.633 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.633 [2024-11-20 15:25:57.958980] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:11.633 [2024-11-20 15:25:57.959255] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:11.633 [2024-11-20 15:25:58.057155] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:11.633 [2024-11-20 15:25:58.057216] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:11.633 [2024-11-20 15:25:58.057232] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:11.633 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.633 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:11.633 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:11.633 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:11.633 15:25:58 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.633 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.633 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.633 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.633 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:11.633 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:11.633 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:11.633 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88255 00:18:11.634 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88255 ']' 00:18:11.634 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88255 00:18:11.634 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:11.893 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:11.893 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88255 00:18:11.893 killing process with pid 88255 00:18:11.893 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:11.893 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:11.893 15:25:58 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88255' 00:18:11.893 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88255 00:18:11.893 [2024-11-20 15:25:58.152801] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:11.893 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88255 00:18:11.893 [2024-11-20 15:25:58.170164] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:12.866 15:25:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:18:12.866 00:18:12.866 real 0m4.963s 00:18:12.866 user 0m7.071s 00:18:12.866 sys 0m0.915s 00:18:12.866 ************************************ 00:18:12.866 END TEST raid_state_function_test_sb_md_interleaved 00:18:12.866 ************************************ 00:18:12.866 15:25:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:12.866 15:25:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.126 15:25:59 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:18:13.126 15:25:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:13.126 15:25:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:13.126 15:25:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:13.126 ************************************ 00:18:13.126 START TEST raid_superblock_test_md_interleaved 00:18:13.126 ************************************ 00:18:13.126 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:13.126 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local 
raid_level=raid1 00:18:13.126 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:13.126 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:13.126 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:13.126 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:13.126 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:13.126 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:13.126 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:13.126 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:13.126 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:13.126 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:13.126 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:13.126 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:13.126 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:13.126 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:13.126 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:13.126 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88496 00:18:13.126 15:25:59 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88496 00:18:13.126 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88496 ']' 00:18:13.126 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.126 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.126 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.126 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.126 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.126 [2024-11-20 15:25:59.482800] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:18:13.126 [2024-11-20 15:25:59.482987] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88496 ] 00:18:13.385 [2024-11-20 15:25:59.666332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.385 [2024-11-20 15:25:59.787953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.645 [2024-11-20 15:26:00.000055] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:13.645 [2024-11-20 15:26:00.000101] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:13.904 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:13.904 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:13.904 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:13.904 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:13.904 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:13.904 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:13.904 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:13.904 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:13.904 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:13.904 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:13.904 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:18:13.904 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.904 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.904 malloc1 00:18:13.904 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.904 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:13.904 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.904 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.904 [2024-11-20 15:26:00.382964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:13.904 [2024-11-20 15:26:00.383181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.904 [2024-11-20 15:26:00.383245] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:13.904 [2024-11-20 15:26:00.383339] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.163 [2024-11-20 15:26:00.385577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.163 [2024-11-20 15:26:00.385736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:14.163 pt1 00:18:14.163 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.163 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:14.163 15:26:00 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:14.163 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:14.163 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.164 malloc2 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.164 [2024-11-20 15:26:00.441560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:14.164 [2024-11-20 15:26:00.441782] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.164 [2024-11-20 15:26:00.441848] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:14.164 [2024-11-20 15:26:00.441942] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.164 [2024-11-20 15:26:00.444242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.164 [2024-11-20 15:26:00.444388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:14.164 pt2 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.164 [2024-11-20 15:26:00.453601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:14.164 [2024-11-20 15:26:00.455905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:14.164 [2024-11-20 15:26:00.456121] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:14.164 [2024-11-20 15:26:00.456136] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:14.164 [2024-11-20 15:26:00.456239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:14.164 [2024-11-20 15:26:00.456320] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:14.164 [2024-11-20 15:26:00.456334] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:14.164 [2024-11-20 15:26:00.456422] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.164 "name": "raid_bdev1", 00:18:14.164 "uuid": "06519721-0f17-49d5-a2dd-652eaae1eb20", 00:18:14.164 "strip_size_kb": 0, 00:18:14.164 "state": "online", 00:18:14.164 "raid_level": "raid1", 00:18:14.164 "superblock": true, 00:18:14.164 "num_base_bdevs": 2, 00:18:14.164 "num_base_bdevs_discovered": 2, 00:18:14.164 "num_base_bdevs_operational": 2, 00:18:14.164 "base_bdevs_list": [ 00:18:14.164 { 00:18:14.164 "name": "pt1", 00:18:14.164 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:14.164 "is_configured": true, 00:18:14.164 "data_offset": 256, 00:18:14.164 "data_size": 7936 00:18:14.164 }, 00:18:14.164 { 00:18:14.164 "name": "pt2", 00:18:14.164 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:14.164 "is_configured": true, 00:18:14.164 "data_offset": 256, 00:18:14.164 "data_size": 7936 00:18:14.164 } 00:18:14.164 ] 00:18:14.164 }' 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.164 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.424 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:14.424 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:14.424 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:14.424 15:26:00 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:14.424 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:14.424 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:14.424 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:14.424 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:14.424 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.424 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.424 [2024-11-20 15:26:00.901239] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:14.683 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.683 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:14.683 "name": "raid_bdev1", 00:18:14.683 "aliases": [ 00:18:14.683 "06519721-0f17-49d5-a2dd-652eaae1eb20" 00:18:14.683 ], 00:18:14.683 "product_name": "Raid Volume", 00:18:14.683 "block_size": 4128, 00:18:14.683 "num_blocks": 7936, 00:18:14.683 "uuid": "06519721-0f17-49d5-a2dd-652eaae1eb20", 00:18:14.683 "md_size": 32, 00:18:14.683 "md_interleave": true, 00:18:14.683 "dif_type": 0, 00:18:14.683 "assigned_rate_limits": { 00:18:14.683 "rw_ios_per_sec": 0, 00:18:14.683 "rw_mbytes_per_sec": 0, 00:18:14.683 "r_mbytes_per_sec": 0, 00:18:14.683 "w_mbytes_per_sec": 0 00:18:14.683 }, 00:18:14.683 "claimed": false, 00:18:14.683 "zoned": false, 00:18:14.683 "supported_io_types": { 00:18:14.683 "read": true, 00:18:14.683 "write": true, 00:18:14.683 "unmap": false, 00:18:14.683 "flush": false, 00:18:14.683 "reset": true, 
00:18:14.683 "nvme_admin": false, 00:18:14.683 "nvme_io": false, 00:18:14.683 "nvme_io_md": false, 00:18:14.683 "write_zeroes": true, 00:18:14.683 "zcopy": false, 00:18:14.683 "get_zone_info": false, 00:18:14.683 "zone_management": false, 00:18:14.683 "zone_append": false, 00:18:14.683 "compare": false, 00:18:14.683 "compare_and_write": false, 00:18:14.683 "abort": false, 00:18:14.683 "seek_hole": false, 00:18:14.683 "seek_data": false, 00:18:14.683 "copy": false, 00:18:14.683 "nvme_iov_md": false 00:18:14.683 }, 00:18:14.683 "memory_domains": [ 00:18:14.683 { 00:18:14.683 "dma_device_id": "system", 00:18:14.683 "dma_device_type": 1 00:18:14.683 }, 00:18:14.683 { 00:18:14.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.683 "dma_device_type": 2 00:18:14.683 }, 00:18:14.683 { 00:18:14.683 "dma_device_id": "system", 00:18:14.683 "dma_device_type": 1 00:18:14.683 }, 00:18:14.683 { 00:18:14.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.683 "dma_device_type": 2 00:18:14.683 } 00:18:14.683 ], 00:18:14.683 "driver_specific": { 00:18:14.683 "raid": { 00:18:14.683 "uuid": "06519721-0f17-49d5-a2dd-652eaae1eb20", 00:18:14.683 "strip_size_kb": 0, 00:18:14.683 "state": "online", 00:18:14.683 "raid_level": "raid1", 00:18:14.683 "superblock": true, 00:18:14.683 "num_base_bdevs": 2, 00:18:14.683 "num_base_bdevs_discovered": 2, 00:18:14.683 "num_base_bdevs_operational": 2, 00:18:14.683 "base_bdevs_list": [ 00:18:14.683 { 00:18:14.683 "name": "pt1", 00:18:14.683 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:14.683 "is_configured": true, 00:18:14.683 "data_offset": 256, 00:18:14.683 "data_size": 7936 00:18:14.683 }, 00:18:14.683 { 00:18:14.683 "name": "pt2", 00:18:14.683 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:14.683 "is_configured": true, 00:18:14.683 "data_offset": 256, 00:18:14.683 "data_size": 7936 00:18:14.683 } 00:18:14.683 ] 00:18:14.683 } 00:18:14.683 } 00:18:14.683 }' 00:18:14.683 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:14.683 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:14.683 pt2' 00:18:14.683 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:14.683 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:14.683 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:14.683 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:14.683 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:14.683 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.683 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.683 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.683 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:14.683 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:14.683 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:14.683 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:14.683 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.683 
15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.683 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:14.683 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.683 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:14.683 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:14.683 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:14.683 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:14.683 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.683 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.683 [2024-11-20 15:26:01.124924] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:14.683 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=06519721-0f17-49d5-a2dd-652eaae1eb20 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 06519721-0f17-49d5-a2dd-652eaae1eb20 ']' 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.943 15:26:01 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.943 [2024-11-20 15:26:01.168539] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:14.943 [2024-11-20 15:26:01.168719] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:14.943 [2024-11-20 15:26:01.168915] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:14.943 [2024-11-20 15:26:01.169069] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:14.943 [2024-11-20 15:26:01.169175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:14.943 15:26:01 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:14.943 
15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.943 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.943 [2024-11-20 15:26:01.308375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:14.943 [2024-11-20 15:26:01.310732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:14.943 [2024-11-20 15:26:01.310825] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:14.943 [2024-11-20 15:26:01.310893] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:14.943 [2024-11-20 15:26:01.310912] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:14.943 [2024-11-20 15:26:01.310926] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:14.943 request: 
00:18:14.943 { 00:18:14.943 "name": "raid_bdev1", 00:18:14.943 "raid_level": "raid1", 00:18:14.943 "base_bdevs": [ 00:18:14.943 "malloc1", 00:18:14.943 "malloc2" 00:18:14.943 ], 00:18:14.943 "superblock": false, 00:18:14.943 "method": "bdev_raid_create", 00:18:14.943 "req_id": 1 00:18:14.943 } 00:18:14.943 Got JSON-RPC error response 00:18:14.943 response: 00:18:14.943 { 00:18:14.944 "code": -17, 00:18:14.944 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:14.944 } 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # 
rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.944 [2024-11-20 15:26:01.368278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:14.944 [2024-11-20 15:26:01.368357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.944 [2024-11-20 15:26:01.368380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:14.944 [2024-11-20 15:26:01.368395] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.944 [2024-11-20 15:26:01.370824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.944 [2024-11-20 15:26:01.371008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:14.944 [2024-11-20 15:26:01.371095] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:14.944 [2024-11-20 15:26:01.371173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:14.944 pt1 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.944 15:26:01 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.944 "name": "raid_bdev1", 00:18:14.944 "uuid": "06519721-0f17-49d5-a2dd-652eaae1eb20", 00:18:14.944 "strip_size_kb": 0, 00:18:14.944 "state": "configuring", 00:18:14.944 "raid_level": "raid1", 00:18:14.944 "superblock": true, 00:18:14.944 "num_base_bdevs": 2, 00:18:14.944 "num_base_bdevs_discovered": 1, 00:18:14.944 "num_base_bdevs_operational": 2, 00:18:14.944 "base_bdevs_list": [ 00:18:14.944 { 00:18:14.944 "name": "pt1", 00:18:14.944 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:14.944 "is_configured": true, 00:18:14.944 
"data_offset": 256, 00:18:14.944 "data_size": 7936 00:18:14.944 }, 00:18:14.944 { 00:18:14.944 "name": null, 00:18:14.944 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:14.944 "is_configured": false, 00:18:14.944 "data_offset": 256, 00:18:14.944 "data_size": 7936 00:18:14.944 } 00:18:14.944 ] 00:18:14.944 }' 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.944 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.514 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:15.514 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:15.514 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:15.514 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:15.514 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.514 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.514 [2024-11-20 15:26:01.767716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:15.514 [2024-11-20 15:26:01.767802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.514 [2024-11-20 15:26:01.767827] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:15.514 [2024-11-20 15:26:01.767842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.514 [2024-11-20 15:26:01.768019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.514 [2024-11-20 15:26:01.768039] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:18:15.514 [2024-11-20 15:26:01.768095] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:15.514 [2024-11-20 15:26:01.768119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:15.514 [2024-11-20 15:26:01.768202] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:15.514 [2024-11-20 15:26:01.768215] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:15.514 [2024-11-20 15:26:01.768282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:15.514 [2024-11-20 15:26:01.768343] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:15.514 [2024-11-20 15:26:01.768352] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:15.514 [2024-11-20 15:26:01.768417] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.514 pt2 00:18:15.514 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.514 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:15.514 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:15.514 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:15.514 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:15.514 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:15.514 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.514 15:26:01 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.514 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:15.514 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.514 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.514 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.514 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.514 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.514 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.514 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.514 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.514 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.514 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.514 "name": "raid_bdev1", 00:18:15.514 "uuid": "06519721-0f17-49d5-a2dd-652eaae1eb20", 00:18:15.514 "strip_size_kb": 0, 00:18:15.514 "state": "online", 00:18:15.514 "raid_level": "raid1", 00:18:15.514 "superblock": true, 00:18:15.514 "num_base_bdevs": 2, 00:18:15.514 "num_base_bdevs_discovered": 2, 00:18:15.514 "num_base_bdevs_operational": 2, 00:18:15.514 "base_bdevs_list": [ 00:18:15.514 { 00:18:15.514 "name": "pt1", 00:18:15.514 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:15.514 "is_configured": true, 00:18:15.514 
"data_offset": 256, 00:18:15.514 "data_size": 7936 00:18:15.514 }, 00:18:15.514 { 00:18:15.514 "name": "pt2", 00:18:15.514 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:15.514 "is_configured": true, 00:18:15.514 "data_offset": 256, 00:18:15.514 "data_size": 7936 00:18:15.514 } 00:18:15.514 ] 00:18:15.514 }' 00:18:15.514 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.514 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.773 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:15.773 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:15.773 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:15.773 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:15.773 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:15.773 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:15.773 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:15.773 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:15.773 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.773 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.773 [2024-11-20 15:26:02.195365] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:15.773 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:18:15.773 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:15.773 "name": "raid_bdev1", 00:18:15.773 "aliases": [ 00:18:15.773 "06519721-0f17-49d5-a2dd-652eaae1eb20" 00:18:15.773 ], 00:18:15.773 "product_name": "Raid Volume", 00:18:15.773 "block_size": 4128, 00:18:15.773 "num_blocks": 7936, 00:18:15.773 "uuid": "06519721-0f17-49d5-a2dd-652eaae1eb20", 00:18:15.773 "md_size": 32, 00:18:15.773 "md_interleave": true, 00:18:15.773 "dif_type": 0, 00:18:15.773 "assigned_rate_limits": { 00:18:15.773 "rw_ios_per_sec": 0, 00:18:15.773 "rw_mbytes_per_sec": 0, 00:18:15.773 "r_mbytes_per_sec": 0, 00:18:15.773 "w_mbytes_per_sec": 0 00:18:15.773 }, 00:18:15.773 "claimed": false, 00:18:15.773 "zoned": false, 00:18:15.773 "supported_io_types": { 00:18:15.773 "read": true, 00:18:15.773 "write": true, 00:18:15.773 "unmap": false, 00:18:15.773 "flush": false, 00:18:15.773 "reset": true, 00:18:15.773 "nvme_admin": false, 00:18:15.773 "nvme_io": false, 00:18:15.773 "nvme_io_md": false, 00:18:15.773 "write_zeroes": true, 00:18:15.773 "zcopy": false, 00:18:15.773 "get_zone_info": false, 00:18:15.773 "zone_management": false, 00:18:15.773 "zone_append": false, 00:18:15.773 "compare": false, 00:18:15.773 "compare_and_write": false, 00:18:15.773 "abort": false, 00:18:15.773 "seek_hole": false, 00:18:15.773 "seek_data": false, 00:18:15.773 "copy": false, 00:18:15.773 "nvme_iov_md": false 00:18:15.773 }, 00:18:15.773 "memory_domains": [ 00:18:15.773 { 00:18:15.773 "dma_device_id": "system", 00:18:15.773 "dma_device_type": 1 00:18:15.773 }, 00:18:15.773 { 00:18:15.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.773 "dma_device_type": 2 00:18:15.773 }, 00:18:15.773 { 00:18:15.773 "dma_device_id": "system", 00:18:15.773 "dma_device_type": 1 00:18:15.773 }, 00:18:15.773 { 00:18:15.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.773 "dma_device_type": 2 00:18:15.773 } 00:18:15.773 ], 00:18:15.773 "driver_specific": { 
00:18:15.773 "raid": { 00:18:15.773 "uuid": "06519721-0f17-49d5-a2dd-652eaae1eb20", 00:18:15.773 "strip_size_kb": 0, 00:18:15.773 "state": "online", 00:18:15.773 "raid_level": "raid1", 00:18:15.773 "superblock": true, 00:18:15.773 "num_base_bdevs": 2, 00:18:15.773 "num_base_bdevs_discovered": 2, 00:18:15.773 "num_base_bdevs_operational": 2, 00:18:15.773 "base_bdevs_list": [ 00:18:15.773 { 00:18:15.773 "name": "pt1", 00:18:15.773 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:15.773 "is_configured": true, 00:18:15.773 "data_offset": 256, 00:18:15.773 "data_size": 7936 00:18:15.773 }, 00:18:15.773 { 00:18:15.773 "name": "pt2", 00:18:15.773 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:15.773 "is_configured": true, 00:18:15.773 "data_offset": 256, 00:18:15.773 "data_size": 7936 00:18:15.773 } 00:18:15.773 ] 00:18:15.773 } 00:18:15.773 } 00:18:15.773 }' 00:18:15.773 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:16.033 pt2' 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.033 [2024-11-20 15:26:02.427162] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 06519721-0f17-49d5-a2dd-652eaae1eb20 '!=' 06519721-0f17-49d5-a2dd-652eaae1eb20 ']' 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.033 [2024-11-20 15:26:02.466930] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.033 
15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.033 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.292 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.292 "name": "raid_bdev1", 00:18:16.292 "uuid": "06519721-0f17-49d5-a2dd-652eaae1eb20", 00:18:16.292 "strip_size_kb": 0, 00:18:16.292 "state": "online", 00:18:16.292 "raid_level": "raid1", 00:18:16.292 "superblock": true, 00:18:16.292 "num_base_bdevs": 2, 00:18:16.292 "num_base_bdevs_discovered": 1, 00:18:16.292 "num_base_bdevs_operational": 1, 00:18:16.292 "base_bdevs_list": [ 00:18:16.292 { 00:18:16.292 "name": null, 00:18:16.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.292 "is_configured": false, 00:18:16.292 
"data_offset": 0, 00:18:16.292 "data_size": 7936 00:18:16.292 }, 00:18:16.292 { 00:18:16.292 "name": "pt2", 00:18:16.292 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:16.292 "is_configured": true, 00:18:16.292 "data_offset": 256, 00:18:16.292 "data_size": 7936 00:18:16.292 } 00:18:16.292 ] 00:18:16.292 }' 00:18:16.292 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.292 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.553 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:16.553 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.553 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.553 [2024-11-20 15:26:02.870877] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:16.553 [2024-11-20 15:26:02.870912] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:16.553 [2024-11-20 15:26:02.870995] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:16.553 [2024-11-20 15:26:02.871044] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:16.553 [2024-11-20 15:26:02.871059] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:16.553 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.553 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:16.553 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.553 15:26:02 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.553 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.553 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.553 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:16.553 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:16.553 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:16.553 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:16.553 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:16.553 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.553 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.553 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.553 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:16.553 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:16.553 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:16.553 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:16.553 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:18:16.553 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:18:16.553 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.553 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.553 [2024-11-20 15:26:02.930898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:16.553 [2024-11-20 15:26:02.931187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.553 [2024-11-20 15:26:02.931218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:16.553 [2024-11-20 15:26:02.931233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.553 [2024-11-20 15:26:02.933668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.553 [2024-11-20 15:26:02.933826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:16.553 [2024-11-20 15:26:02.933909] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:16.553 [2024-11-20 15:26:02.933969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:16.553 [2024-11-20 15:26:02.934046] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:16.553 [2024-11-20 15:26:02.934061] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:16.553 [2024-11-20 15:26:02.934164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:16.553 [2024-11-20 15:26:02.934226] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:16.553 [2024-11-20 15:26:02.934235] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:16.553 [2024-11-20 15:26:02.934302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:18:16.553 pt2 00:18:16.553 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.554 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:16.554 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.554 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.554 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.554 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.554 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:16.554 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.554 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.554 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.554 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.554 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.554 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.554 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.554 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.554 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.554 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.554 "name": "raid_bdev1", 00:18:16.554 "uuid": "06519721-0f17-49d5-a2dd-652eaae1eb20", 00:18:16.554 "strip_size_kb": 0, 00:18:16.554 "state": "online", 00:18:16.554 "raid_level": "raid1", 00:18:16.554 "superblock": true, 00:18:16.554 "num_base_bdevs": 2, 00:18:16.554 "num_base_bdevs_discovered": 1, 00:18:16.554 "num_base_bdevs_operational": 1, 00:18:16.554 "base_bdevs_list": [ 00:18:16.554 { 00:18:16.554 "name": null, 00:18:16.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.554 "is_configured": false, 00:18:16.554 "data_offset": 256, 00:18:16.554 "data_size": 7936 00:18:16.554 }, 00:18:16.554 { 00:18:16.554 "name": "pt2", 00:18:16.554 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:16.554 "is_configured": true, 00:18:16.554 "data_offset": 256, 00:18:16.554 "data_size": 7936 00:18:16.554 } 00:18:16.554 ] 00:18:16.554 }' 00:18:16.554 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.554 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.123 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:17.123 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.123 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.123 [2024-11-20 15:26:03.370871] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:17.123 [2024-11-20 15:26:03.370907] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:17.123 [2024-11-20 15:26:03.370987] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:17.123 
[2024-11-20 15:26:03.371043] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:17.123 [2024-11-20 15:26:03.371055] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:17.123 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.123 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.123 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.123 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.123 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:17.123 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.123 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:17.124 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:17.124 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:17.124 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:17.124 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.124 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.124 [2024-11-20 15:26:03.426908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:17.124 [2024-11-20 15:26:03.426988] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:18:17.124 [2024-11-20 15:26:03.427013] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:17.124 [2024-11-20 15:26:03.427025] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.124 [2024-11-20 15:26:03.429471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.124 [2024-11-20 15:26:03.429522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:17.124 [2024-11-20 15:26:03.429596] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:17.124 [2024-11-20 15:26:03.429678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:17.124 [2024-11-20 15:26:03.429787] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:17.124 [2024-11-20 15:26:03.429800] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:17.124 [2024-11-20 15:26:03.429822] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:17.124 [2024-11-20 15:26:03.429893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:17.124 [2024-11-20 15:26:03.429974] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:17.124 [2024-11-20 15:26:03.429984] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:17.124 [2024-11-20 15:26:03.430065] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:17.124 [2024-11-20 15:26:03.430121] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:17.124 [2024-11-20 15:26:03.430134] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:17.124 [2024-11-20 
15:26:03.430208] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.124 pt1 00:18:17.124 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.124 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:17.124 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:17.124 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:17.124 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.124 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.124 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.124 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:17.124 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.124 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.124 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.124 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.124 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.124 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.124 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.124 15:26:03 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.124 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.124 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.124 "name": "raid_bdev1", 00:18:17.124 "uuid": "06519721-0f17-49d5-a2dd-652eaae1eb20", 00:18:17.124 "strip_size_kb": 0, 00:18:17.124 "state": "online", 00:18:17.124 "raid_level": "raid1", 00:18:17.124 "superblock": true, 00:18:17.124 "num_base_bdevs": 2, 00:18:17.124 "num_base_bdevs_discovered": 1, 00:18:17.124 "num_base_bdevs_operational": 1, 00:18:17.124 "base_bdevs_list": [ 00:18:17.124 { 00:18:17.124 "name": null, 00:18:17.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.124 "is_configured": false, 00:18:17.124 "data_offset": 256, 00:18:17.124 "data_size": 7936 00:18:17.124 }, 00:18:17.124 { 00:18:17.124 "name": "pt2", 00:18:17.124 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:17.124 "is_configured": true, 00:18:17.124 "data_offset": 256, 00:18:17.124 "data_size": 7936 00:18:17.124 } 00:18:17.124 ] 00:18:17.124 }' 00:18:17.124 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.124 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.693 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:17.693 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:17.693 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.693 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.693 15:26:03 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.693 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:17.693 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:17.693 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.693 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:17.693 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.693 [2024-11-20 15:26:03.915084] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:17.693 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.694 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 06519721-0f17-49d5-a2dd-652eaae1eb20 '!=' 06519721-0f17-49d5-a2dd-652eaae1eb20 ']' 00:18:17.694 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88496 00:18:17.694 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88496 ']' 00:18:17.694 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88496 00:18:17.694 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:17.694 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:17.694 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88496 00:18:17.694 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:18:17.694 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:17.694 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88496' 00:18:17.694 killing process with pid 88496 00:18:17.694 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88496 00:18:17.694 [2024-11-20 15:26:04.003320] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:17.694 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88496 00:18:17.694 [2024-11-20 15:26:04.003565] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:17.694 [2024-11-20 15:26:04.003621] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:17.694 [2024-11-20 15:26:04.003640] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:17.954 [2024-11-20 15:26:04.213780] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:18.898 15:26:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:18:18.898 00:18:18.898 real 0m5.980s 00:18:18.898 user 0m8.964s 00:18:18.898 sys 0m1.264s 00:18:18.898 15:26:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:18.898 ************************************ 00:18:18.898 END TEST raid_superblock_test_md_interleaved 00:18:18.898 ************************************ 00:18:18.898 15:26:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.157 15:26:05 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:19.157 15:26:05 bdev_raid -- common/autotest_common.sh@1105 -- # 
'[' 7 -le 1 ']' 00:18:19.157 15:26:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:19.157 15:26:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:19.157 ************************************ 00:18:19.157 START TEST raid_rebuild_test_sb_md_interleaved 00:18:19.157 ************************************ 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=88819 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 88819 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88819 ']' 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.157 15:26:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.157 [2024-11-20 15:26:05.544933] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:18:19.157 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:19.157 Zero copy mechanism will not be used. 00:18:19.157 [2024-11-20 15:26:05.545287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88819 ] 00:18:19.415 [2024-11-20 15:26:05.728544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.415 [2024-11-20 15:26:05.852472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.674 [2024-11-20 15:26:06.068029] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:19.674 [2024-11-20 15:26:06.068103] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:19.933 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.933 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:19.933 15:26:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:19.933 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:19.933 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.933 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.195 BaseBdev1_malloc 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.195 [2024-11-20 15:26:06.459554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:20.195 [2024-11-20 15:26:06.459646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.195 [2024-11-20 15:26:06.459690] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:20.195 [2024-11-20 15:26:06.459705] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.195 [2024-11-20 15:26:06.461909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.195 [2024-11-20 15:26:06.462095] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:20.195 BaseBdev1 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.195 BaseBdev2_malloc 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.195 [2024-11-20 15:26:06.516706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:20.195 [2024-11-20 15:26:06.516975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.195 [2024-11-20 15:26:06.517008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:20.195 [2024-11-20 15:26:06.517026] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.195 [2024-11-20 15:26:06.519201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.195 [2024-11-20 15:26:06.519248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:20.195 BaseBdev2 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd 
bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.195 spare_malloc 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.195 spare_delay 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.195 [2024-11-20 15:26:06.598888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:20.195 [2024-11-20 15:26:06.599167] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.195 [2024-11-20 15:26:06.599204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:20.195 [2024-11-20 15:26:06.599219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.195 [2024-11-20 15:26:06.601434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.195 
[2024-11-20 15:26:06.601484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:20.195 spare 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.195 [2024-11-20 15:26:06.610920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:20.195 [2024-11-20 15:26:06.613108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:20.195 [2024-11-20 15:26:06.613461] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:20.195 [2024-11-20 15:26:06.613485] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:20.195 [2024-11-20 15:26:06.613590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:20.195 [2024-11-20 15:26:06.613684] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:20.195 [2024-11-20 15:26:06.613694] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:20.195 [2024-11-20 15:26:06.613789] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:20.195 15:26:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.195 "name": "raid_bdev1", 00:18:20.195 "uuid": "99e4261f-18a6-4344-b2cb-f2a2ef4befa8", 00:18:20.195 "strip_size_kb": 0, 00:18:20.195 "state": "online", 00:18:20.195 
"raid_level": "raid1", 00:18:20.195 "superblock": true, 00:18:20.195 "num_base_bdevs": 2, 00:18:20.195 "num_base_bdevs_discovered": 2, 00:18:20.195 "num_base_bdevs_operational": 2, 00:18:20.195 "base_bdevs_list": [ 00:18:20.195 { 00:18:20.195 "name": "BaseBdev1", 00:18:20.195 "uuid": "a7506a01-be1e-5780-a41b-4a0f5bfb4f37", 00:18:20.195 "is_configured": true, 00:18:20.195 "data_offset": 256, 00:18:20.195 "data_size": 7936 00:18:20.195 }, 00:18:20.195 { 00:18:20.195 "name": "BaseBdev2", 00:18:20.195 "uuid": "d864965c-24ad-5eb9-9fd3-6811c64950c5", 00:18:20.195 "is_configured": true, 00:18:20.195 "data_offset": 256, 00:18:20.195 "data_size": 7936 00:18:20.195 } 00:18:20.195 ] 00:18:20.195 }' 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.195 15:26:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.763 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:20.763 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.763 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.763 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:20.763 [2024-11-20 15:26:07.023188] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:20.763 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.763 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:20.763 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:20.763 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.763 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.763 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.763 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.763 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:20.763 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:20.763 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:18:20.763 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:20.764 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.764 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.764 [2024-11-20 15:26:07.111097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:20.764 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.764 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:20.764 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.764 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.764 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.764 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:18:20.764 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:20.764 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.764 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.764 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.764 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.764 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.764 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.764 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.764 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.764 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.764 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.764 "name": "raid_bdev1", 00:18:20.764 "uuid": "99e4261f-18a6-4344-b2cb-f2a2ef4befa8", 00:18:20.764 "strip_size_kb": 0, 00:18:20.764 "state": "online", 00:18:20.764 "raid_level": "raid1", 00:18:20.764 "superblock": true, 00:18:20.764 "num_base_bdevs": 2, 00:18:20.764 "num_base_bdevs_discovered": 1, 00:18:20.764 "num_base_bdevs_operational": 1, 00:18:20.764 "base_bdevs_list": [ 00:18:20.764 { 00:18:20.764 "name": null, 00:18:20.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.764 "is_configured": false, 00:18:20.764 "data_offset": 0, 00:18:20.764 "data_size": 7936 00:18:20.764 }, 00:18:20.764 { 00:18:20.764 
"name": "BaseBdev2", 00:18:20.764 "uuid": "d864965c-24ad-5eb9-9fd3-6811c64950c5", 00:18:20.764 "is_configured": true, 00:18:20.764 "data_offset": 256, 00:18:20.764 "data_size": 7936 00:18:20.764 } 00:18:20.764 ] 00:18:20.764 }' 00:18:20.764 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.764 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.333 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:21.333 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.333 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.333 [2024-11-20 15:26:07.522902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:21.333 [2024-11-20 15:26:07.540265] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:21.333 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.333 15:26:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:21.333 [2024-11-20 15:26:07.542700] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:22.271 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:22.271 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.271 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:22.271 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:22.271 15:26:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.271 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.271 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.271 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.271 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.271 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.271 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.271 "name": "raid_bdev1", 00:18:22.271 "uuid": "99e4261f-18a6-4344-b2cb-f2a2ef4befa8", 00:18:22.271 "strip_size_kb": 0, 00:18:22.271 "state": "online", 00:18:22.271 "raid_level": "raid1", 00:18:22.271 "superblock": true, 00:18:22.271 "num_base_bdevs": 2, 00:18:22.271 "num_base_bdevs_discovered": 2, 00:18:22.271 "num_base_bdevs_operational": 2, 00:18:22.271 "process": { 00:18:22.271 "type": "rebuild", 00:18:22.271 "target": "spare", 00:18:22.271 "progress": { 00:18:22.271 "blocks": 2560, 00:18:22.271 "percent": 32 00:18:22.271 } 00:18:22.271 }, 00:18:22.271 "base_bdevs_list": [ 00:18:22.271 { 00:18:22.271 "name": "spare", 00:18:22.271 "uuid": "fee088f2-bd54-5080-9cde-44fc6b226514", 00:18:22.271 "is_configured": true, 00:18:22.271 "data_offset": 256, 00:18:22.271 "data_size": 7936 00:18:22.271 }, 00:18:22.271 { 00:18:22.271 "name": "BaseBdev2", 00:18:22.271 "uuid": "d864965c-24ad-5eb9-9fd3-6811c64950c5", 00:18:22.271 "is_configured": true, 00:18:22.271 "data_offset": 256, 00:18:22.271 "data_size": 7936 00:18:22.271 } 00:18:22.271 ] 00:18:22.271 }' 00:18:22.271 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.271 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:22.271 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:22.271 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:22.271 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:22.271 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.271 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.271 [2024-11-20 15:26:08.694901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:22.271 [2024-11-20 15:26:08.748515] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:22.271 [2024-11-20 15:26:08.748615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.271 [2024-11-20 15:26:08.748632] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:22.271 [2024-11-20 15:26:08.748648] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:22.531 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.531 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:22.531 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:22.531 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.531 15:26:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:22.531 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:22.531 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:22.531 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.531 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.531 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.531 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.531 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.531 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.531 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.531 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.531 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.531 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.531 "name": "raid_bdev1", 00:18:22.531 "uuid": "99e4261f-18a6-4344-b2cb-f2a2ef4befa8", 00:18:22.531 "strip_size_kb": 0, 00:18:22.531 "state": "online", 00:18:22.531 "raid_level": "raid1", 00:18:22.531 "superblock": true, 00:18:22.531 "num_base_bdevs": 2, 00:18:22.531 "num_base_bdevs_discovered": 1, 00:18:22.531 "num_base_bdevs_operational": 1, 00:18:22.531 "base_bdevs_list": [ 00:18:22.531 { 00:18:22.531 "name": null, 
00:18:22.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.531 "is_configured": false, 00:18:22.531 "data_offset": 0, 00:18:22.531 "data_size": 7936 00:18:22.531 }, 00:18:22.531 { 00:18:22.531 "name": "BaseBdev2", 00:18:22.531 "uuid": "d864965c-24ad-5eb9-9fd3-6811c64950c5", 00:18:22.531 "is_configured": true, 00:18:22.531 "data_offset": 256, 00:18:22.531 "data_size": 7936 00:18:22.531 } 00:18:22.531 ] 00:18:22.531 }' 00:18:22.531 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.531 15:26:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.790 15:26:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:22.790 15:26:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.790 15:26:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:22.790 15:26:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:22.790 15:26:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.790 15:26:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.790 15:26:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.790 15:26:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.790 15:26:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.049 15:26:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.049 15:26:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.049 "name": "raid_bdev1", 00:18:23.049 "uuid": "99e4261f-18a6-4344-b2cb-f2a2ef4befa8", 00:18:23.049 "strip_size_kb": 0, 00:18:23.049 "state": "online", 00:18:23.049 "raid_level": "raid1", 00:18:23.049 "superblock": true, 00:18:23.049 "num_base_bdevs": 2, 00:18:23.049 "num_base_bdevs_discovered": 1, 00:18:23.049 "num_base_bdevs_operational": 1, 00:18:23.049 "base_bdevs_list": [ 00:18:23.049 { 00:18:23.049 "name": null, 00:18:23.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.049 "is_configured": false, 00:18:23.049 "data_offset": 0, 00:18:23.049 "data_size": 7936 00:18:23.049 }, 00:18:23.049 { 00:18:23.049 "name": "BaseBdev2", 00:18:23.049 "uuid": "d864965c-24ad-5eb9-9fd3-6811c64950c5", 00:18:23.049 "is_configured": true, 00:18:23.049 "data_offset": 256, 00:18:23.049 "data_size": 7936 00:18:23.049 } 00:18:23.049 ] 00:18:23.049 }' 00:18:23.049 15:26:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.049 15:26:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:23.049 15:26:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.049 15:26:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:23.049 15:26:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:23.049 15:26:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.049 15:26:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.049 [2024-11-20 15:26:09.412853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:23.049 [2024-11-20 15:26:09.429223] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:23.049 15:26:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.049 15:26:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:23.049 [2024-11-20 15:26:09.431448] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:23.987 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:23.987 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.987 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:23.987 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:23.987 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.987 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.987 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.987 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.987 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.247 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.247 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.247 "name": "raid_bdev1", 00:18:24.247 "uuid": "99e4261f-18a6-4344-b2cb-f2a2ef4befa8", 00:18:24.247 "strip_size_kb": 0, 00:18:24.247 "state": "online", 00:18:24.247 "raid_level": "raid1", 00:18:24.247 
"superblock": true, 00:18:24.247 "num_base_bdevs": 2, 00:18:24.247 "num_base_bdevs_discovered": 2, 00:18:24.247 "num_base_bdevs_operational": 2, 00:18:24.247 "process": { 00:18:24.247 "type": "rebuild", 00:18:24.247 "target": "spare", 00:18:24.247 "progress": { 00:18:24.247 "blocks": 2560, 00:18:24.247 "percent": 32 00:18:24.247 } 00:18:24.247 }, 00:18:24.247 "base_bdevs_list": [ 00:18:24.247 { 00:18:24.247 "name": "spare", 00:18:24.247 "uuid": "fee088f2-bd54-5080-9cde-44fc6b226514", 00:18:24.247 "is_configured": true, 00:18:24.247 "data_offset": 256, 00:18:24.247 "data_size": 7936 00:18:24.247 }, 00:18:24.247 { 00:18:24.247 "name": "BaseBdev2", 00:18:24.247 "uuid": "d864965c-24ad-5eb9-9fd3-6811c64950c5", 00:18:24.247 "is_configured": true, 00:18:24.247 "data_offset": 256, 00:18:24.247 "data_size": 7936 00:18:24.247 } 00:18:24.247 ] 00:18:24.247 }' 00:18:24.247 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.247 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:24.247 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.247 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:24.247 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:24.247 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:24.247 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:24.247 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:24.247 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:24.247 15:26:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:24.247 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=734 00:18:24.247 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:24.247 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:24.247 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.247 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:24.247 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:24.247 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.247 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.247 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.247 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.247 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.247 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.247 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.247 "name": "raid_bdev1", 00:18:24.247 "uuid": "99e4261f-18a6-4344-b2cb-f2a2ef4befa8", 00:18:24.247 "strip_size_kb": 0, 00:18:24.247 "state": "online", 00:18:24.247 "raid_level": "raid1", 00:18:24.247 "superblock": true, 00:18:24.247 "num_base_bdevs": 2, 00:18:24.247 
"num_base_bdevs_discovered": 2, 00:18:24.247 "num_base_bdevs_operational": 2, 00:18:24.247 "process": { 00:18:24.247 "type": "rebuild", 00:18:24.247 "target": "spare", 00:18:24.247 "progress": { 00:18:24.247 "blocks": 2816, 00:18:24.247 "percent": 35 00:18:24.247 } 00:18:24.247 }, 00:18:24.247 "base_bdevs_list": [ 00:18:24.247 { 00:18:24.247 "name": "spare", 00:18:24.247 "uuid": "fee088f2-bd54-5080-9cde-44fc6b226514", 00:18:24.247 "is_configured": true, 00:18:24.247 "data_offset": 256, 00:18:24.247 "data_size": 7936 00:18:24.247 }, 00:18:24.247 { 00:18:24.247 "name": "BaseBdev2", 00:18:24.247 "uuid": "d864965c-24ad-5eb9-9fd3-6811c64950c5", 00:18:24.247 "is_configured": true, 00:18:24.247 "data_offset": 256, 00:18:24.247 "data_size": 7936 00:18:24.247 } 00:18:24.247 ] 00:18:24.247 }' 00:18:24.247 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.247 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:24.247 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.247 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:24.247 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:25.662 15:26:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:25.662 15:26:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:25.662 15:26:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.662 15:26:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:25.662 15:26:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:25.662 15:26:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.662 15:26:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.662 15:26:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.662 15:26:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.662 15:26:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.662 15:26:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.662 15:26:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.662 "name": "raid_bdev1", 00:18:25.662 "uuid": "99e4261f-18a6-4344-b2cb-f2a2ef4befa8", 00:18:25.662 "strip_size_kb": 0, 00:18:25.662 "state": "online", 00:18:25.662 "raid_level": "raid1", 00:18:25.662 "superblock": true, 00:18:25.662 "num_base_bdevs": 2, 00:18:25.662 "num_base_bdevs_discovered": 2, 00:18:25.662 "num_base_bdevs_operational": 2, 00:18:25.662 "process": { 00:18:25.662 "type": "rebuild", 00:18:25.662 "target": "spare", 00:18:25.662 "progress": { 00:18:25.662 "blocks": 5632, 00:18:25.662 "percent": 70 00:18:25.662 } 00:18:25.662 }, 00:18:25.662 "base_bdevs_list": [ 00:18:25.662 { 00:18:25.662 "name": "spare", 00:18:25.662 "uuid": "fee088f2-bd54-5080-9cde-44fc6b226514", 00:18:25.662 "is_configured": true, 00:18:25.662 "data_offset": 256, 00:18:25.662 "data_size": 7936 00:18:25.662 }, 00:18:25.662 { 00:18:25.662 "name": "BaseBdev2", 00:18:25.662 "uuid": "d864965c-24ad-5eb9-9fd3-6811c64950c5", 00:18:25.662 "is_configured": true, 00:18:25.662 "data_offset": 256, 00:18:25.662 "data_size": 7936 00:18:25.662 } 
00:18:25.662 ] 00:18:25.662 }' 00:18:25.662 15:26:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.662 15:26:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:25.662 15:26:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.662 15:26:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:25.662 15:26:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:26.231 [2024-11-20 15:26:12.545955] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:26.231 [2024-11-20 15:26:12.546280] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:26.232 [2024-11-20 15:26:12.546419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.491 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:26.491 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:26.491 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.491 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:26.491 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:26.491 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.491 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.491 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.491 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.491 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.491 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.491 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:26.491 "name": "raid_bdev1", 00:18:26.491 "uuid": "99e4261f-18a6-4344-b2cb-f2a2ef4befa8", 00:18:26.491 "strip_size_kb": 0, 00:18:26.491 "state": "online", 00:18:26.491 "raid_level": "raid1", 00:18:26.491 "superblock": true, 00:18:26.491 "num_base_bdevs": 2, 00:18:26.491 "num_base_bdevs_discovered": 2, 00:18:26.491 "num_base_bdevs_operational": 2, 00:18:26.491 "base_bdevs_list": [ 00:18:26.491 { 00:18:26.491 "name": "spare", 00:18:26.491 "uuid": "fee088f2-bd54-5080-9cde-44fc6b226514", 00:18:26.491 "is_configured": true, 00:18:26.491 "data_offset": 256, 00:18:26.491 "data_size": 7936 00:18:26.492 }, 00:18:26.492 { 00:18:26.492 "name": "BaseBdev2", 00:18:26.492 "uuid": "d864965c-24ad-5eb9-9fd3-6811c64950c5", 00:18:26.492 "is_configured": true, 00:18:26.492 "data_offset": 256, 00:18:26.492 "data_size": 7936 00:18:26.492 } 00:18:26.492 ] 00:18:26.492 }' 00:18:26.492 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.492 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:26.492 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:26.492 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:26.492 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@709 -- # break 00:18:26.492 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:26.492 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.492 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:26.492 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:26.492 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.492 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.492 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.492 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.492 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.752 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.752 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:26.752 "name": "raid_bdev1", 00:18:26.752 "uuid": "99e4261f-18a6-4344-b2cb-f2a2ef4befa8", 00:18:26.752 "strip_size_kb": 0, 00:18:26.752 "state": "online", 00:18:26.752 "raid_level": "raid1", 00:18:26.752 "superblock": true, 00:18:26.752 "num_base_bdevs": 2, 00:18:26.752 "num_base_bdevs_discovered": 2, 00:18:26.752 "num_base_bdevs_operational": 2, 00:18:26.752 "base_bdevs_list": [ 00:18:26.752 { 00:18:26.752 "name": "spare", 00:18:26.752 "uuid": "fee088f2-bd54-5080-9cde-44fc6b226514", 00:18:26.752 "is_configured": true, 00:18:26.752 "data_offset": 256, 00:18:26.752 "data_size": 7936 
00:18:26.752 }, 00:18:26.752 { 00:18:26.752 "name": "BaseBdev2", 00:18:26.752 "uuid": "d864965c-24ad-5eb9-9fd3-6811c64950c5", 00:18:26.752 "is_configured": true, 00:18:26.752 "data_offset": 256, 00:18:26.752 "data_size": 7936 00:18:26.752 } 00:18:26.752 ] 00:18:26.752 }' 00:18:26.752 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.752 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:26.752 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:26.752 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:26.752 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:26.752 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.752 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.752 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.752 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.752 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:26.752 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.752 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.752 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.752 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.752 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.752 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.752 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.752 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.752 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.752 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.752 "name": "raid_bdev1", 00:18:26.752 "uuid": "99e4261f-18a6-4344-b2cb-f2a2ef4befa8", 00:18:26.752 "strip_size_kb": 0, 00:18:26.752 "state": "online", 00:18:26.752 "raid_level": "raid1", 00:18:26.752 "superblock": true, 00:18:26.752 "num_base_bdevs": 2, 00:18:26.752 "num_base_bdevs_discovered": 2, 00:18:26.752 "num_base_bdevs_operational": 2, 00:18:26.752 "base_bdevs_list": [ 00:18:26.752 { 00:18:26.752 "name": "spare", 00:18:26.752 "uuid": "fee088f2-bd54-5080-9cde-44fc6b226514", 00:18:26.752 "is_configured": true, 00:18:26.752 "data_offset": 256, 00:18:26.752 "data_size": 7936 00:18:26.752 }, 00:18:26.752 { 00:18:26.752 "name": "BaseBdev2", 00:18:26.752 "uuid": "d864965c-24ad-5eb9-9fd3-6811c64950c5", 00:18:26.752 "is_configured": true, 00:18:26.752 "data_offset": 256, 00:18:26.752 "data_size": 7936 00:18:26.752 } 00:18:26.752 ] 00:18:26.752 }' 00:18:26.752 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.752 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.012 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete 
raid_bdev1 00:18:27.012 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.012 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.012 [2024-11-20 15:26:13.446856] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:27.012 [2024-11-20 15:26:13.446896] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:27.012 [2024-11-20 15:26:13.446997] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:27.012 [2024-11-20 15:26:13.447068] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:27.012 [2024-11-20 15:26:13.447082] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:27.012 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.012 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:18:27.012 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.012 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.012 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.012 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.012 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:27.012 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:27.012 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:27.012 15:26:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:27.012 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.012 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.272 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.272 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:27.272 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.272 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.272 [2024-11-20 15:26:13.502850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:27.272 [2024-11-20 15:26:13.502929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.272 [2024-11-20 15:26:13.502955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:27.272 [2024-11-20 15:26:13.502967] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.272 [2024-11-20 15:26:13.505284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.272 [2024-11-20 15:26:13.505337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:27.272 [2024-11-20 15:26:13.505428] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:27.272 [2024-11-20 15:26:13.505482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:27.272 [2024-11-20 15:26:13.505610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:27.272 spare 00:18:27.272 15:26:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.272 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:27.272 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.272 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.272 [2024-11-20 15:26:13.605545] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:27.272 [2024-11-20 15:26:13.605610] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:27.272 [2024-11-20 15:26:13.605771] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:27.272 [2024-11-20 15:26:13.605887] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:27.272 [2024-11-20 15:26:13.605900] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:27.272 [2024-11-20 15:26:13.606008] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.272 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.272 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:27.272 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.272 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.272 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.272 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:18:27.272 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:27.272 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.272 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.272 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.272 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.272 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.272 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.272 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.272 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.272 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.272 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.272 "name": "raid_bdev1", 00:18:27.272 "uuid": "99e4261f-18a6-4344-b2cb-f2a2ef4befa8", 00:18:27.272 "strip_size_kb": 0, 00:18:27.272 "state": "online", 00:18:27.272 "raid_level": "raid1", 00:18:27.272 "superblock": true, 00:18:27.272 "num_base_bdevs": 2, 00:18:27.272 "num_base_bdevs_discovered": 2, 00:18:27.272 "num_base_bdevs_operational": 2, 00:18:27.272 "base_bdevs_list": [ 00:18:27.272 { 00:18:27.272 "name": "spare", 00:18:27.272 "uuid": "fee088f2-bd54-5080-9cde-44fc6b226514", 00:18:27.272 "is_configured": true, 00:18:27.272 "data_offset": 256, 00:18:27.272 "data_size": 7936 00:18:27.272 }, 00:18:27.272 { 00:18:27.272 "name": 
"BaseBdev2", 00:18:27.272 "uuid": "d864965c-24ad-5eb9-9fd3-6811c64950c5", 00:18:27.272 "is_configured": true, 00:18:27.272 "data_offset": 256, 00:18:27.272 "data_size": 7936 00:18:27.272 } 00:18:27.273 ] 00:18:27.273 }' 00:18:27.273 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.273 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:27.843 "name": "raid_bdev1", 00:18:27.843 "uuid": "99e4261f-18a6-4344-b2cb-f2a2ef4befa8", 00:18:27.843 "strip_size_kb": 0, 00:18:27.843 "state": "online", 00:18:27.843 
"raid_level": "raid1", 00:18:27.843 "superblock": true, 00:18:27.843 "num_base_bdevs": 2, 00:18:27.843 "num_base_bdevs_discovered": 2, 00:18:27.843 "num_base_bdevs_operational": 2, 00:18:27.843 "base_bdevs_list": [ 00:18:27.843 { 00:18:27.843 "name": "spare", 00:18:27.843 "uuid": "fee088f2-bd54-5080-9cde-44fc6b226514", 00:18:27.843 "is_configured": true, 00:18:27.843 "data_offset": 256, 00:18:27.843 "data_size": 7936 00:18:27.843 }, 00:18:27.843 { 00:18:27.843 "name": "BaseBdev2", 00:18:27.843 "uuid": "d864965c-24ad-5eb9-9fd3-6811c64950c5", 00:18:27.843 "is_configured": true, 00:18:27.843 "data_offset": 256, 00:18:27.843 "data_size": 7936 00:18:27.843 } 00:18:27.843 ] 00:18:27.843 }' 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:27.843 15:26:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.843 [2024-11-20 15:26:14.234929] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.843 15:26:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.843 "name": "raid_bdev1", 00:18:27.843 "uuid": "99e4261f-18a6-4344-b2cb-f2a2ef4befa8", 00:18:27.843 "strip_size_kb": 0, 00:18:27.843 "state": "online", 00:18:27.843 "raid_level": "raid1", 00:18:27.843 "superblock": true, 00:18:27.843 "num_base_bdevs": 2, 00:18:27.843 "num_base_bdevs_discovered": 1, 00:18:27.843 "num_base_bdevs_operational": 1, 00:18:27.843 "base_bdevs_list": [ 00:18:27.843 { 00:18:27.843 "name": null, 00:18:27.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.843 "is_configured": false, 00:18:27.843 "data_offset": 0, 00:18:27.843 "data_size": 7936 00:18:27.843 }, 00:18:27.843 { 00:18:27.843 "name": "BaseBdev2", 00:18:27.843 "uuid": "d864965c-24ad-5eb9-9fd3-6811c64950c5", 00:18:27.843 "is_configured": true, 00:18:27.843 "data_offset": 256, 00:18:27.843 "data_size": 7936 00:18:27.843 } 00:18:27.843 ] 00:18:27.843 }' 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.843 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.412 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:28.413 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.413 15:26:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.413 [2024-11-20 15:26:14.650911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:28.413 [2024-11-20 15:26:14.651118] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:28.413 [2024-11-20 15:26:14.651138] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:28.413 [2024-11-20 15:26:14.651190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:28.413 [2024-11-20 15:26:14.667627] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:28.413 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.413 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:28.413 [2024-11-20 15:26:14.669920] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:29.353 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:29.353 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:29.353 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:29.353 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:29.353 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:29.353 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.353 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.353 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.353 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.353 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.353 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:29.353 "name": "raid_bdev1", 00:18:29.353 "uuid": "99e4261f-18a6-4344-b2cb-f2a2ef4befa8", 00:18:29.353 "strip_size_kb": 0, 00:18:29.353 "state": "online", 00:18:29.353 "raid_level": "raid1", 00:18:29.353 "superblock": true, 00:18:29.353 "num_base_bdevs": 2, 00:18:29.353 "num_base_bdevs_discovered": 2, 00:18:29.353 "num_base_bdevs_operational": 2, 00:18:29.353 "process": { 00:18:29.353 "type": "rebuild", 00:18:29.353 "target": "spare", 00:18:29.353 "progress": { 00:18:29.353 "blocks": 2560, 00:18:29.353 "percent": 32 00:18:29.353 } 00:18:29.353 }, 00:18:29.353 "base_bdevs_list": [ 00:18:29.353 { 00:18:29.353 "name": "spare", 00:18:29.353 "uuid": "fee088f2-bd54-5080-9cde-44fc6b226514", 00:18:29.353 "is_configured": true, 00:18:29.353 "data_offset": 256, 00:18:29.353 "data_size": 7936 00:18:29.353 }, 00:18:29.353 { 00:18:29.353 "name": "BaseBdev2", 00:18:29.353 "uuid": "d864965c-24ad-5eb9-9fd3-6811c64950c5", 00:18:29.353 "is_configured": true, 00:18:29.353 "data_offset": 256, 00:18:29.353 "data_size": 7936 00:18:29.353 } 00:18:29.353 ] 00:18:29.353 }' 00:18:29.353 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:29.353 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:29.353 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // 
"none"' 00:18:29.353 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:29.353 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:29.353 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.353 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.353 [2024-11-20 15:26:15.825463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:29.613 [2024-11-20 15:26:15.875535] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:29.613 [2024-11-20 15:26:15.875929] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.613 [2024-11-20 15:26:15.876028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:29.613 [2024-11-20 15:26:15.876116] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:29.613 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.613 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:29.614 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:29.614 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.614 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:29.614 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:29.614 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:18:29.614 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.614 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.614 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.614 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.614 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.614 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.614 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.614 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.614 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.614 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.614 "name": "raid_bdev1", 00:18:29.614 "uuid": "99e4261f-18a6-4344-b2cb-f2a2ef4befa8", 00:18:29.614 "strip_size_kb": 0, 00:18:29.614 "state": "online", 00:18:29.614 "raid_level": "raid1", 00:18:29.614 "superblock": true, 00:18:29.614 "num_base_bdevs": 2, 00:18:29.614 "num_base_bdevs_discovered": 1, 00:18:29.614 "num_base_bdevs_operational": 1, 00:18:29.614 "base_bdevs_list": [ 00:18:29.614 { 00:18:29.614 "name": null, 00:18:29.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.614 "is_configured": false, 00:18:29.614 "data_offset": 0, 00:18:29.614 "data_size": 7936 00:18:29.614 }, 00:18:29.614 { 00:18:29.614 "name": "BaseBdev2", 00:18:29.614 "uuid": "d864965c-24ad-5eb9-9fd3-6811c64950c5", 00:18:29.614 "is_configured": true, 
00:18:29.614 "data_offset": 256, 00:18:29.614 "data_size": 7936 00:18:29.614 } 00:18:29.614 ] 00:18:29.614 }' 00:18:29.614 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.614 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.183 15:26:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:30.183 15:26:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.183 15:26:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.183 [2024-11-20 15:26:16.388576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:30.183 [2024-11-20 15:26:16.388684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.183 [2024-11-20 15:26:16.388715] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:30.183 [2024-11-20 15:26:16.388730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.183 [2024-11-20 15:26:16.388930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.183 [2024-11-20 15:26:16.388948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:30.183 [2024-11-20 15:26:16.389011] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:30.183 [2024-11-20 15:26:16.389027] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:30.183 [2024-11-20 15:26:16.389038] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:30.183 [2024-11-20 15:26:16.389062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:30.183 [2024-11-20 15:26:16.405072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:30.183 spare 00:18:30.183 15:26:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.183 15:26:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:30.183 [2024-11-20 15:26:16.407232] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:31.119 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:31.119 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.119 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:31.119 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:31.119 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.119 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.119 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.119 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.119 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.119 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.119 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:31.119 "name": "raid_bdev1", 00:18:31.119 "uuid": "99e4261f-18a6-4344-b2cb-f2a2ef4befa8", 00:18:31.119 "strip_size_kb": 0, 00:18:31.119 "state": "online", 00:18:31.119 "raid_level": "raid1", 00:18:31.119 "superblock": true, 00:18:31.119 "num_base_bdevs": 2, 00:18:31.119 "num_base_bdevs_discovered": 2, 00:18:31.119 "num_base_bdevs_operational": 2, 00:18:31.119 "process": { 00:18:31.119 "type": "rebuild", 00:18:31.119 "target": "spare", 00:18:31.119 "progress": { 00:18:31.119 "blocks": 2560, 00:18:31.119 "percent": 32 00:18:31.119 } 00:18:31.119 }, 00:18:31.119 "base_bdevs_list": [ 00:18:31.119 { 00:18:31.119 "name": "spare", 00:18:31.119 "uuid": "fee088f2-bd54-5080-9cde-44fc6b226514", 00:18:31.119 "is_configured": true, 00:18:31.119 "data_offset": 256, 00:18:31.119 "data_size": 7936 00:18:31.119 }, 00:18:31.119 { 00:18:31.119 "name": "BaseBdev2", 00:18:31.119 "uuid": "d864965c-24ad-5eb9-9fd3-6811c64950c5", 00:18:31.119 "is_configured": true, 00:18:31.119 "data_offset": 256, 00:18:31.119 "data_size": 7936 00:18:31.119 } 00:18:31.119 ] 00:18:31.119 }' 00:18:31.119 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.119 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:31.120 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.120 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:31.120 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:31.120 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.120 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.120 [2024-11-20 
15:26:17.551232] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:31.379 [2024-11-20 15:26:17.612827] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:31.380 [2024-11-20 15:26:17.613193] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.380 [2024-11-20 15:26:17.613307] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:31.380 [2024-11-20 15:26:17.613345] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:31.380 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.380 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:31.380 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.380 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.380 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.380 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.380 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:31.380 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.380 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.380 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.380 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.380 15:26:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.380 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.380 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.380 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.380 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.380 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.380 "name": "raid_bdev1", 00:18:31.380 "uuid": "99e4261f-18a6-4344-b2cb-f2a2ef4befa8", 00:18:31.380 "strip_size_kb": 0, 00:18:31.380 "state": "online", 00:18:31.380 "raid_level": "raid1", 00:18:31.380 "superblock": true, 00:18:31.380 "num_base_bdevs": 2, 00:18:31.380 "num_base_bdevs_discovered": 1, 00:18:31.380 "num_base_bdevs_operational": 1, 00:18:31.380 "base_bdevs_list": [ 00:18:31.380 { 00:18:31.380 "name": null, 00:18:31.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.380 "is_configured": false, 00:18:31.380 "data_offset": 0, 00:18:31.380 "data_size": 7936 00:18:31.380 }, 00:18:31.380 { 00:18:31.380 "name": "BaseBdev2", 00:18:31.380 "uuid": "d864965c-24ad-5eb9-9fd3-6811c64950c5", 00:18:31.380 "is_configured": true, 00:18:31.380 "data_offset": 256, 00:18:31.380 "data_size": 7936 00:18:31.380 } 00:18:31.380 ] 00:18:31.380 }' 00:18:31.380 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.380 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.639 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:31.639 15:26:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.639 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:31.639 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:31.639 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.639 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.639 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.639 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.639 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.639 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.898 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.898 "name": "raid_bdev1", 00:18:31.898 "uuid": "99e4261f-18a6-4344-b2cb-f2a2ef4befa8", 00:18:31.898 "strip_size_kb": 0, 00:18:31.898 "state": "online", 00:18:31.898 "raid_level": "raid1", 00:18:31.898 "superblock": true, 00:18:31.898 "num_base_bdevs": 2, 00:18:31.898 "num_base_bdevs_discovered": 1, 00:18:31.898 "num_base_bdevs_operational": 1, 00:18:31.898 "base_bdevs_list": [ 00:18:31.898 { 00:18:31.898 "name": null, 00:18:31.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.898 "is_configured": false, 00:18:31.898 "data_offset": 0, 00:18:31.898 "data_size": 7936 00:18:31.898 }, 00:18:31.898 { 00:18:31.898 "name": "BaseBdev2", 00:18:31.898 "uuid": "d864965c-24ad-5eb9-9fd3-6811c64950c5", 00:18:31.898 "is_configured": true, 00:18:31.898 "data_offset": 256, 
00:18:31.898 "data_size": 7936 00:18:31.898 } 00:18:31.898 ] 00:18:31.898 }' 00:18:31.898 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.899 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:31.899 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.899 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:31.899 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:31.899 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.899 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.899 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.899 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:31.899 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.899 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.899 [2024-11-20 15:26:18.245710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:31.899 [2024-11-20 15:26:18.245979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.899 [2024-11-20 15:26:18.246014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:31.899 [2024-11-20 15:26:18.246026] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.899 [2024-11-20 15:26:18.246214] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.899 [2024-11-20 15:26:18.246230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:31.899 [2024-11-20 15:26:18.246290] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:31.899 [2024-11-20 15:26:18.246304] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:31.899 [2024-11-20 15:26:18.246316] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:31.899 [2024-11-20 15:26:18.246328] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:31.899 BaseBdev1 00:18:31.899 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.899 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:32.855 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:32.855 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.855 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.855 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.855 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.855 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:32.855 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.855 15:26:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.855 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.855 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.855 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.855 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.855 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.855 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.855 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.856 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.856 "name": "raid_bdev1", 00:18:32.856 "uuid": "99e4261f-18a6-4344-b2cb-f2a2ef4befa8", 00:18:32.856 "strip_size_kb": 0, 00:18:32.856 "state": "online", 00:18:32.856 "raid_level": "raid1", 00:18:32.856 "superblock": true, 00:18:32.856 "num_base_bdevs": 2, 00:18:32.856 "num_base_bdevs_discovered": 1, 00:18:32.856 "num_base_bdevs_operational": 1, 00:18:32.856 "base_bdevs_list": [ 00:18:32.856 { 00:18:32.856 "name": null, 00:18:32.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.856 "is_configured": false, 00:18:32.856 "data_offset": 0, 00:18:32.856 "data_size": 7936 00:18:32.856 }, 00:18:32.856 { 00:18:32.856 "name": "BaseBdev2", 00:18:32.856 "uuid": "d864965c-24ad-5eb9-9fd3-6811c64950c5", 00:18:32.856 "is_configured": true, 00:18:32.856 "data_offset": 256, 00:18:32.856 "data_size": 7936 00:18:32.856 } 00:18:32.856 ] 00:18:32.856 }' 00:18:32.856 15:26:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.856 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.430 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:33.430 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.430 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:33.430 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:33.430 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.430 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.430 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.430 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.430 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.430 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.430 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.430 "name": "raid_bdev1", 00:18:33.430 "uuid": "99e4261f-18a6-4344-b2cb-f2a2ef4befa8", 00:18:33.430 "strip_size_kb": 0, 00:18:33.430 "state": "online", 00:18:33.430 "raid_level": "raid1", 00:18:33.430 "superblock": true, 00:18:33.430 "num_base_bdevs": 2, 00:18:33.430 "num_base_bdevs_discovered": 1, 00:18:33.430 "num_base_bdevs_operational": 1, 00:18:33.430 "base_bdevs_list": [ 00:18:33.430 { 00:18:33.430 "name": 
null, 00:18:33.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.430 "is_configured": false, 00:18:33.430 "data_offset": 0, 00:18:33.430 "data_size": 7936 00:18:33.430 }, 00:18:33.430 { 00:18:33.430 "name": "BaseBdev2", 00:18:33.430 "uuid": "d864965c-24ad-5eb9-9fd3-6811c64950c5", 00:18:33.430 "is_configured": true, 00:18:33.430 "data_offset": 256, 00:18:33.430 "data_size": 7936 00:18:33.430 } 00:18:33.430 ] 00:18:33.430 }' 00:18:33.430 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.430 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:33.430 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.430 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:33.430 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:33.430 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:33.430 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:33.430 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:33.430 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:33.430 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:33.430 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:33.430 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:33.430 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.430 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.430 [2024-11-20 15:26:19.831582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:33.430 [2024-11-20 15:26:19.831766] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:33.430 [2024-11-20 15:26:19.831788] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:33.430 request: 00:18:33.430 { 00:18:33.430 "base_bdev": "BaseBdev1", 00:18:33.430 "raid_bdev": "raid_bdev1", 00:18:33.430 "method": "bdev_raid_add_base_bdev", 00:18:33.430 "req_id": 1 00:18:33.430 } 00:18:33.430 Got JSON-RPC error response 00:18:33.430 response: 00:18:33.430 { 00:18:33.430 "code": -22, 00:18:33.430 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:33.430 } 00:18:33.430 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:33.430 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:33.430 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:33.430 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:33.430 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:33.430 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:34.367 15:26:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:18:34.367 15:26:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:34.367 15:26:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:34.367 15:26:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.367 15:26:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:34.367 15:26:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:34.367 15:26:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.367 15:26:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.367 15:26:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.367 15:26:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.625 15:26:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.625 15:26:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.625 15:26:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.625 15:26:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.625 15:26:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.625 15:26:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.625 "name": "raid_bdev1", 00:18:34.625 "uuid": "99e4261f-18a6-4344-b2cb-f2a2ef4befa8", 00:18:34.625 "strip_size_kb": 0, 
00:18:34.625 "state": "online", 00:18:34.625 "raid_level": "raid1", 00:18:34.625 "superblock": true, 00:18:34.625 "num_base_bdevs": 2, 00:18:34.625 "num_base_bdevs_discovered": 1, 00:18:34.625 "num_base_bdevs_operational": 1, 00:18:34.625 "base_bdevs_list": [ 00:18:34.625 { 00:18:34.625 "name": null, 00:18:34.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.625 "is_configured": false, 00:18:34.625 "data_offset": 0, 00:18:34.625 "data_size": 7936 00:18:34.626 }, 00:18:34.626 { 00:18:34.626 "name": "BaseBdev2", 00:18:34.626 "uuid": "d864965c-24ad-5eb9-9fd3-6811c64950c5", 00:18:34.626 "is_configured": true, 00:18:34.626 "data_offset": 256, 00:18:34.626 "data_size": 7936 00:18:34.626 } 00:18:34.626 ] 00:18:34.626 }' 00:18:34.626 15:26:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.626 15:26:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.958 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:34.958 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.958 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:34.958 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:34.958 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.958 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.958 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.958 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.958 15:26:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.958 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.958 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:34.958 "name": "raid_bdev1", 00:18:34.958 "uuid": "99e4261f-18a6-4344-b2cb-f2a2ef4befa8", 00:18:34.958 "strip_size_kb": 0, 00:18:34.958 "state": "online", 00:18:34.958 "raid_level": "raid1", 00:18:34.958 "superblock": true, 00:18:34.958 "num_base_bdevs": 2, 00:18:34.958 "num_base_bdevs_discovered": 1, 00:18:34.958 "num_base_bdevs_operational": 1, 00:18:34.958 "base_bdevs_list": [ 00:18:34.958 { 00:18:34.958 "name": null, 00:18:34.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.958 "is_configured": false, 00:18:34.958 "data_offset": 0, 00:18:34.958 "data_size": 7936 00:18:34.958 }, 00:18:34.958 { 00:18:34.958 "name": "BaseBdev2", 00:18:34.958 "uuid": "d864965c-24ad-5eb9-9fd3-6811c64950c5", 00:18:34.958 "is_configured": true, 00:18:34.958 "data_offset": 256, 00:18:34.958 "data_size": 7936 00:18:34.958 } 00:18:34.958 ] 00:18:34.958 }' 00:18:34.958 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:34.958 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:34.958 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:34.958 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:34.958 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 88819 00:18:34.958 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88819 ']' 00:18:34.958 15:26:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88819 00:18:34.958 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:35.218 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:35.218 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88819 00:18:35.218 killing process with pid 88819 00:18:35.218 Received shutdown signal, test time was about 60.000000 seconds 00:18:35.218 00:18:35.218 Latency(us) 00:18:35.218 [2024-11-20T15:26:21.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.218 [2024-11-20T15:26:21.700Z] =================================================================================================================== 00:18:35.218 [2024-11-20T15:26:21.700Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:35.218 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:35.218 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:35.218 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88819' 00:18:35.218 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88819 00:18:35.218 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88819 00:18:35.218 [2024-11-20 15:26:21.439021] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:35.218 [2024-11-20 15:26:21.439162] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:35.218 [2024-11-20 15:26:21.439214] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:18:35.218 [2024-11-20 15:26:21.439229] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:35.477 [2024-11-20 15:26:21.755341] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:36.416 15:26:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:18:36.416 00:18:36.416 real 0m17.446s 00:18:36.416 user 0m22.776s 00:18:36.416 sys 0m1.766s 00:18:36.416 15:26:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:36.416 ************************************ 00:18:36.416 END TEST raid_rebuild_test_sb_md_interleaved 00:18:36.416 15:26:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.416 ************************************ 00:18:36.675 15:26:22 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:18:36.675 15:26:22 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:18:36.675 15:26:22 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 88819 ']' 00:18:36.675 15:26:22 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 88819 00:18:36.675 15:26:22 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:18:36.675 00:18:36.675 real 11m56.751s 00:18:36.675 user 16m0.790s 00:18:36.675 sys 2m4.905s 00:18:36.675 15:26:22 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:36.675 15:26:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:36.675 ************************************ 00:18:36.675 END TEST bdev_raid 00:18:36.675 ************************************ 00:18:36.675 15:26:23 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:36.675 15:26:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:36.675 15:26:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:36.675 15:26:23 -- common/autotest_common.sh@10 -- # set +x 00:18:36.675 
************************************ 00:18:36.675 START TEST spdkcli_raid 00:18:36.675 ************************************ 00:18:36.675 15:26:23 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:36.935 * Looking for test storage... 00:18:36.935 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:36.935 15:26:23 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:36.935 15:26:23 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:36.935 15:26:23 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:18:36.935 15:26:23 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:36.935 15:26:23 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:36.935 15:26:23 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:36.935 15:26:23 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:36.935 15:26:23 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:18:36.935 15:26:23 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:18:36.935 15:26:23 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:18:36.935 15:26:23 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:18:36.935 15:26:23 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:18:36.935 15:26:23 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:18:36.935 15:26:23 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:18:36.936 15:26:23 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:36.936 15:26:23 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:18:36.936 15:26:23 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:18:36.936 15:26:23 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:36.936 15:26:23 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:36.936 15:26:23 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:18:36.936 15:26:23 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:18:36.936 15:26:23 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:36.936 15:26:23 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:18:36.936 15:26:23 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:36.936 15:26:23 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:18:36.936 15:26:23 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:18:36.936 15:26:23 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:36.936 15:26:23 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:18:36.936 15:26:23 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:36.936 15:26:23 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:36.936 15:26:23 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:36.936 15:26:23 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:18:36.936 15:26:23 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:36.936 15:26:23 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:36.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.936 --rc genhtml_branch_coverage=1 00:18:36.936 --rc genhtml_function_coverage=1 00:18:36.936 --rc genhtml_legend=1 00:18:36.936 --rc geninfo_all_blocks=1 00:18:36.936 --rc geninfo_unexecuted_blocks=1 00:18:36.936 00:18:36.936 ' 00:18:36.936 15:26:23 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:36.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.936 --rc genhtml_branch_coverage=1 00:18:36.936 --rc genhtml_function_coverage=1 00:18:36.936 --rc genhtml_legend=1 00:18:36.936 --rc geninfo_all_blocks=1 00:18:36.936 --rc geninfo_unexecuted_blocks=1 00:18:36.936 00:18:36.936 ' 00:18:36.936 
15:26:23 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:36.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.936 --rc genhtml_branch_coverage=1 00:18:36.936 --rc genhtml_function_coverage=1 00:18:36.936 --rc genhtml_legend=1 00:18:36.936 --rc geninfo_all_blocks=1 00:18:36.936 --rc geninfo_unexecuted_blocks=1 00:18:36.936 00:18:36.936 ' 00:18:36.936 15:26:23 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:36.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.936 --rc genhtml_branch_coverage=1 00:18:36.936 --rc genhtml_function_coverage=1 00:18:36.936 --rc genhtml_legend=1 00:18:36.936 --rc geninfo_all_blocks=1 00:18:36.936 --rc geninfo_unexecuted_blocks=1 00:18:36.936 00:18:36.936 ' 00:18:36.936 15:26:23 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:36.936 15:26:23 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:36.936 15:26:23 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:36.936 15:26:23 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:18:36.936 15:26:23 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:18:36.936 15:26:23 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:18:36.936 15:26:23 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:18:36.936 15:26:23 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:18:36.936 15:26:23 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:18:36.936 15:26:23 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:18:36.936 15:26:23 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
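The xtrace a few entries above steps through the dotted-version comparison in `scripts/common.sh` (`lt 1.15 2` expanding into `cmp_versions`, the `IFS=.-` splits, and the component loop). A minimal standalone sketch of that logic follows — it is a reconstruction for illustration, mirroring the helper names visible in the trace, not the verbatim SPDK source:

```shell
#!/usr/bin/env bash
# Sketch of the scripts/common.sh version comparison traced above.
# Reconstruction for illustration; not the verbatim SPDK implementation.

cmp_versions() {
    local ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"   # split "2.39.2" into (2 39 2)
    local op=$2
    IFS=.- read -ra ver2 <<< "$3"
    local lt=0 gt=0 v

    # Compare component by component; a missing component counts as 0.
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        if ((d1 > d2)); then
            gt=1
            break
        elif ((d1 < d2)); then
            lt=1
            break
        fi
    done

    case "$op" in
        "<") ((lt == 1)) ;;
        ">") ((gt == 1)) ;;
        "==") ((lt == 0 && gt == 0)) ;;
    esac
}

lt() { cmp_versions "$1" "<" "$2"; }
```

This is why `lt 1.15 2` in the trace succeeds: the first components compare 1 < 2, so later components (the 15) never matter.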
00:18:36.936 15:26:23 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:18:36.936 15:26:23 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:18:36.936 15:26:23 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:18:36.936 15:26:23 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:18:36.936 15:26:23 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:18:36.936 15:26:23 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:18:36.936 15:26:23 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:18:36.936 15:26:23 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:18:36.936 15:26:23 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:18:36.936 15:26:23 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:18:36.936 15:26:23 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:18:36.936 15:26:23 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:18:36.936 15:26:23 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:18:36.936 15:26:23 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:18:36.936 15:26:23 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:36.936 15:26:23 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:36.936 15:26:23 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:36.936 15:26:23 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:36.936 15:26:23 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:36.936 15:26:23 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:36.936 15:26:23 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:18:36.936 15:26:23 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:18:36.936 15:26:23 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:36.936 15:26:23 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:36.936 15:26:23 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:18:36.936 15:26:23 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89500 00:18:36.936 15:26:23 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89500 00:18:36.936 15:26:23 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89500 ']' 00:18:36.936 15:26:23 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:18:36.936 15:26:23 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.936 15:26:23 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:36.936 15:26:23 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.936 15:26:23 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:36.936 15:26:23 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:37.195 [2024-11-20 15:26:23.447266] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
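The `waitforlisten 89500` call in the trace above blocks until the freshly launched `spdk_tgt` is up and listening on `/var/tmp/spdk.sock`. The pattern behind it is a bounded poll loop; the sketch below uses a hypothetical `wait_for_path` helper to show the shape, assuming only that readiness is observable as a path appearing (SPDK's real helper additionally re-checks that the PID is still alive between retries, which is omitted here):

```shell
#!/usr/bin/env bash
# Sketch of the "wait for the target's RPC socket" pattern behind
# waitforlisten. wait_for_path is a hypothetical helper, not SPDK's.

wait_for_path() {
    local path=$1
    local max_retries=${2:-100}   # ~10s total at 0.1s per retry
    local i
    for ((i = 0; i < max_retries; i++)); do
        if [[ -e "$path" ]]; then
            return 0              # target is listening
        fi
        sleep 0.1
    done
    echo "timed out waiting for $path" >&2
    return 1
}
```

A caller would launch the target in the background, then `wait_for_path /var/tmp/spdk.sock || exit 1` before issuing any RPC.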
00:18:37.195 [2024-11-20 15:26:23.447394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89500 ] 00:18:37.195 [2024-11-20 15:26:23.629969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:37.465 [2024-11-20 15:26:23.760207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.465 [2024-11-20 15:26:23.760218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.402 15:26:24 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:38.402 15:26:24 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:18:38.402 15:26:24 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:18:38.402 15:26:24 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:38.402 15:26:24 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:38.402 15:26:24 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:18:38.402 15:26:24 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:38.402 15:26:24 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:38.402 15:26:24 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:18:38.402 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:18:38.402 ' 00:18:39.780 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:18:39.780 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:18:40.039 15:26:26 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:18:40.039 15:26:26 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:40.039 15:26:26 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:18:40.039 15:26:26 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:18:40.039 15:26:26 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:40.039 15:26:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:40.039 15:26:26 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:18:40.039 ' 00:18:40.977 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:18:41.236 15:26:27 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:18:41.236 15:26:27 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:41.236 15:26:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:41.236 15:26:27 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:18:41.236 15:26:27 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:41.236 15:26:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:41.236 15:26:27 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:18:41.236 15:26:27 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:18:41.805 15:26:28 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:18:41.805 15:26:28 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:18:41.805 15:26:28 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:18:41.805 15:26:28 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:41.805 15:26:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:41.805 15:26:28 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:18:41.805 15:26:28 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:41.805 15:26:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:41.805 15:26:28 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:18:41.805 ' 00:18:42.741 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:18:43.010 15:26:29 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:18:43.010 15:26:29 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:43.010 15:26:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:43.010 15:26:29 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:18:43.010 15:26:29 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:43.010 15:26:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:43.010 15:26:29 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:18:43.010 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:18:43.010 ' 00:18:44.389 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:18:44.389 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:18:44.389 15:26:30 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:18:44.389 15:26:30 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:44.389 15:26:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:44.389 15:26:30 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89500 00:18:44.389 15:26:30 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89500 ']' 00:18:44.389 15:26:30 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89500 00:18:44.389 15:26:30 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:18:44.389 15:26:30 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:44.389 15:26:30 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89500 00:18:44.648 15:26:30 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:44.648 15:26:30 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:44.648 15:26:30 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89500' 00:18:44.648 killing process with pid 89500 00:18:44.648 15:26:30 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89500 00:18:44.648 15:26:30 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89500 00:18:47.190 15:26:33 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:18:47.190 15:26:33 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89500 ']' 00:18:47.190 15:26:33 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89500 00:18:47.190 15:26:33 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89500 ']' 00:18:47.190 15:26:33 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89500 00:18:47.190 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89500) - No such process 00:18:47.190 Process with pid 89500 is not found 00:18:47.190 15:26:33 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89500 is not found' 00:18:47.190 15:26:33 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:18:47.190 15:26:33 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:18:47.190 15:26:33 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:18:47.190 15:26:33 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:18:47.190 00:18:47.190 real 0m10.260s 00:18:47.190 user 0m20.998s 00:18:47.190 sys 
0m1.176s 00:18:47.190 15:26:33 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:47.190 15:26:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:47.190 ************************************ 00:18:47.190 END TEST spdkcli_raid 00:18:47.190 ************************************ 00:18:47.190 15:26:33 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:47.190 15:26:33 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:47.190 15:26:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:47.190 15:26:33 -- common/autotest_common.sh@10 -- # set +x 00:18:47.190 ************************************ 00:18:47.190 START TEST blockdev_raid5f 00:18:47.190 ************************************ 00:18:47.190 15:26:33 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:47.190 * Looking for test storage... 00:18:47.190 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:47.190 15:26:33 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:47.190 15:26:33 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:18:47.190 15:26:33 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:47.190 15:26:33 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:47.190 15:26:33 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:47.190 15:26:33 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:47.190 15:26:33 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:47.190 15:26:33 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:18:47.190 15:26:33 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:18:47.190 15:26:33 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:18:47.190 15:26:33 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:18:47.190 15:26:33 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:18:47.190 15:26:33 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:18:47.190 15:26:33 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:18:47.190 15:26:33 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:47.190 15:26:33 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:18:47.190 15:26:33 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:18:47.190 15:26:33 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:47.190 15:26:33 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:47.190 15:26:33 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:18:47.190 15:26:33 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:18:47.190 15:26:33 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:47.190 15:26:33 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:18:47.190 15:26:33 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:18:47.190 15:26:33 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:18:47.190 15:26:33 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:18:47.190 15:26:33 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:47.190 15:26:33 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:18:47.190 15:26:33 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:18:47.190 15:26:33 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:47.190 15:26:33 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:47.190 15:26:33 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:18:47.190 15:26:33 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:47.190 15:26:33 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:47.190 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.190 --rc genhtml_branch_coverage=1 00:18:47.190 --rc genhtml_function_coverage=1 00:18:47.190 --rc genhtml_legend=1 00:18:47.190 --rc geninfo_all_blocks=1 00:18:47.190 --rc geninfo_unexecuted_blocks=1 00:18:47.190 00:18:47.190 ' 00:18:47.190 15:26:33 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:47.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.190 --rc genhtml_branch_coverage=1 00:18:47.190 --rc genhtml_function_coverage=1 00:18:47.190 --rc genhtml_legend=1 00:18:47.190 --rc geninfo_all_blocks=1 00:18:47.190 --rc geninfo_unexecuted_blocks=1 00:18:47.190 00:18:47.190 ' 00:18:47.190 15:26:33 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:47.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.190 --rc genhtml_branch_coverage=1 00:18:47.190 --rc genhtml_function_coverage=1 00:18:47.190 --rc genhtml_legend=1 00:18:47.190 --rc geninfo_all_blocks=1 00:18:47.190 --rc geninfo_unexecuted_blocks=1 00:18:47.190 00:18:47.190 ' 00:18:47.190 15:26:33 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:47.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.190 --rc genhtml_branch_coverage=1 00:18:47.190 --rc genhtml_function_coverage=1 00:18:47.190 --rc genhtml_legend=1 00:18:47.190 --rc geninfo_all_blocks=1 00:18:47.190 --rc geninfo_unexecuted_blocks=1 00:18:47.190 00:18:47.190 ' 00:18:47.190 15:26:33 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:47.190 15:26:33 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:18:47.190 15:26:33 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:47.190 15:26:33 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:47.190 15:26:33 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:47.190 15:26:33 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:47.190 15:26:33 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:47.190 15:26:33 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:47.190 15:26:33 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:18:47.190 15:26:33 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:18:47.190 15:26:33 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:18:47.190 15:26:33 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:18:47.190 15:26:33 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:18:47.190 15:26:33 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:18:47.190 15:26:33 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:18:47.190 15:26:33 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:18:47.190 15:26:33 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:18:47.190 15:26:33 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:18:47.190 15:26:33 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:18:47.190 15:26:33 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:18:47.190 15:26:33 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:18:47.190 15:26:33 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:18:47.190 15:26:33 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:18:47.190 15:26:33 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:18:47.190 15:26:33 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=89776 00:18:47.190 15:26:33 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:47.190 15:26:33 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:47.190 15:26:33 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 89776 00:18:47.190 15:26:33 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 89776 ']' 00:18:47.190 15:26:33 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.190 15:26:33 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:47.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.190 15:26:33 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.190 15:26:33 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:47.190 15:26:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:47.450 [2024-11-20 15:26:33.743416] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:18:47.450 [2024-11-20 15:26:33.743548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89776 ] 00:18:47.450 [2024-11-20 15:26:33.924771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.710 [2024-11-20 15:26:34.050064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.648 15:26:34 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:48.648 15:26:34 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:18:48.648 15:26:34 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:18:48.648 15:26:34 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:18:48.648 15:26:34 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:18:48.648 15:26:34 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.648 15:26:34 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:48.648 Malloc0 00:18:48.648 Malloc1 00:18:48.648 Malloc2 00:18:48.648 15:26:35 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.648 15:26:35 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:18:48.648 15:26:35 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.648 15:26:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:48.648 15:26:35 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.648 15:26:35 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:18:48.648 15:26:35 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:18:48.648 15:26:35 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.648 15:26:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:48.908 15:26:35 
blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.908 15:26:35 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:18:48.908 15:26:35 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.908 15:26:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:48.908 15:26:35 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.908 15:26:35 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:48.908 15:26:35 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.908 15:26:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:48.909 15:26:35 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.909 15:26:35 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:18:48.909 15:26:35 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:18:48.909 15:26:35 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:18:48.909 15:26:35 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.909 15:26:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:48.909 15:26:35 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.909 15:26:35 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:18:48.909 15:26:35 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "771dc1a1-0be7-42ed-bdd5-c4ee6c220a1e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "771dc1a1-0be7-42ed-bdd5-c4ee6c220a1e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' 
"flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "771dc1a1-0be7-42ed-bdd5-c4ee6c220a1e",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "60fda490-5b0f-4b6f-88c7-89588c580317",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "3b14b758-f37e-48f5-b0c5-100917c4d08e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "0dd655ee-5607-4254-8b30-fcea2d40ac80",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:48.909 15:26:35 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:18:48.909 15:26:35 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:18:48.909 15:26:35 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:18:48.909 15:26:35 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:18:48.909 15:26:35 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 89776 00:18:48.909 15:26:35 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 89776 ']' 00:18:48.909 15:26:35 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 89776 00:18:48.909 15:26:35 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:18:48.909 15:26:35 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:48.909 
15:26:35 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89776 00:18:48.909 15:26:35 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:48.909 15:26:35 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:48.909 killing process with pid 89776 00:18:48.909 15:26:35 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89776' 00:18:48.909 15:26:35 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 89776 00:18:48.909 15:26:35 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 89776 00:18:52.205 15:26:38 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:52.205 15:26:38 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:52.205 15:26:38 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:52.205 15:26:38 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:52.205 15:26:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:52.205 ************************************ 00:18:52.205 START TEST bdev_hello_world 00:18:52.205 ************************************ 00:18:52.205 15:26:38 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:52.205 [2024-11-20 15:26:38.164757] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:18:52.205 [2024-11-20 15:26:38.164894] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89843 ] 00:18:52.205 [2024-11-20 15:26:38.345568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.205 [2024-11-20 15:26:38.466561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.773 [2024-11-20 15:26:38.987332] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:52.773 [2024-11-20 15:26:38.987392] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:18:52.773 [2024-11-20 15:26:38.987415] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:52.773 [2024-11-20 15:26:38.987980] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:52.773 [2024-11-20 15:26:38.988141] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:52.773 [2024-11-20 15:26:38.988169] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:52.773 [2024-11-20 15:26:38.988243] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:18:52.773 00:18:52.773 [2024-11-20 15:26:38.988270] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:54.208 00:18:54.208 real 0m2.329s 00:18:54.208 user 0m1.957s 00:18:54.208 sys 0m0.249s 00:18:54.208 15:26:40 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:54.208 15:26:40 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:54.208 ************************************ 00:18:54.208 END TEST bdev_hello_world 00:18:54.208 ************************************ 00:18:54.208 15:26:40 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:18:54.208 15:26:40 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:54.208 15:26:40 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:54.208 15:26:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:54.208 ************************************ 00:18:54.208 START TEST bdev_bounds 00:18:54.208 ************************************ 00:18:54.208 15:26:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:18:54.208 15:26:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=89885 00:18:54.208 15:26:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:54.208 15:26:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:54.208 Process bdevio pid: 89885 00:18:54.208 15:26:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 89885' 00:18:54.208 15:26:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 89885 00:18:54.208 15:26:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 89885 ']' 00:18:54.208 15:26:40 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.208 15:26:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:54.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.208 15:26:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.208 15:26:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:54.208 15:26:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:54.208 [2024-11-20 15:26:40.578618] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:18:54.208 [2024-11-20 15:26:40.578766] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89885 ] 00:18:54.467 [2024-11-20 15:26:40.765531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:54.467 [2024-11-20 15:26:40.888461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:54.467 [2024-11-20 15:26:40.888483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.467 [2024-11-20 15:26:40.888483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:55.033 15:26:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:55.033 15:26:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:18:55.033 15:26:41 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:55.293 I/O targets: 00:18:55.293 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:18:55.293 00:18:55.293 
00:18:55.293 CUnit - A unit testing framework for C - Version 2.1-3 00:18:55.293 http://cunit.sourceforge.net/ 00:18:55.293 00:18:55.293 00:18:55.293 Suite: bdevio tests on: raid5f 00:18:55.293 Test: blockdev write read block ...passed 00:18:55.293 Test: blockdev write zeroes read block ...passed 00:18:55.293 Test: blockdev write zeroes read no split ...passed 00:18:55.293 Test: blockdev write zeroes read split ...passed 00:18:55.552 Test: blockdev write zeroes read split partial ...passed 00:18:55.552 Test: blockdev reset ...passed 00:18:55.552 Test: blockdev write read 8 blocks ...passed 00:18:55.552 Test: blockdev write read size > 128k ...passed 00:18:55.552 Test: blockdev write read invalid size ...passed 00:18:55.552 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:55.552 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:55.552 Test: blockdev write read max offset ...passed 00:18:55.552 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:55.552 Test: blockdev writev readv 8 blocks ...passed 00:18:55.552 Test: blockdev writev readv 30 x 1block ...passed 00:18:55.552 Test: blockdev writev readv block ...passed 00:18:55.552 Test: blockdev writev readv size > 128k ...passed 00:18:55.552 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:55.552 Test: blockdev comparev and writev ...passed 00:18:55.552 Test: blockdev nvme passthru rw ...passed 00:18:55.552 Test: blockdev nvme passthru vendor specific ...passed 00:18:55.552 Test: blockdev nvme admin passthru ...passed 00:18:55.552 Test: blockdev copy ...passed 00:18:55.552 00:18:55.552 Run Summary: Type Total Ran Passed Failed Inactive 00:18:55.552 suites 1 1 n/a 0 0 00:18:55.552 tests 23 23 23 0 0 00:18:55.552 asserts 130 130 130 0 n/a 00:18:55.552 00:18:55.552 Elapsed time = 0.604 seconds 00:18:55.552 0 00:18:55.552 15:26:41 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 89885 00:18:55.552 
15:26:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 89885 ']' 00:18:55.552 15:26:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 89885 00:18:55.552 15:26:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:18:55.552 15:26:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.552 15:26:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89885 00:18:55.552 15:26:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:55.552 15:26:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:55.552 15:26:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89885' 00:18:55.552 killing process with pid 89885 00:18:55.552 15:26:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 89885 00:18:55.552 15:26:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 89885 00:18:56.929 15:26:43 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:18:56.929 00:18:56.929 real 0m2.879s 00:18:56.929 user 0m7.178s 00:18:56.929 sys 0m0.427s 00:18:56.929 15:26:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:56.929 ************************************ 00:18:56.929 END TEST bdev_bounds 00:18:56.929 ************************************ 00:18:56.929 15:26:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:57.188 15:26:43 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:57.188 15:26:43 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:57.188 15:26:43 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:57.188 
15:26:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:57.188 ************************************ 00:18:57.188 START TEST bdev_nbd 00:18:57.188 ************************************ 00:18:57.188 15:26:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:57.188 15:26:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:18:57.188 15:26:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:18:57.188 15:26:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:57.188 15:26:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:57.188 15:26:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:18:57.188 15:26:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:18:57.188 15:26:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:18:57.188 15:26:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:18:57.188 15:26:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:18:57.188 15:26:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:18:57.188 15:26:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:18:57.188 15:26:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:18:57.188 15:26:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:18:57.188 15:26:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:18:57.188 15:26:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:18:57.188 15:26:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=89950 00:18:57.188 15:26:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:18:57.189 15:26:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 89950 /var/tmp/spdk-nbd.sock 00:18:57.189 15:26:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 89950 ']' 00:18:57.189 15:26:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:57.189 15:26:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:57.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:57.189 15:26:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:57.189 15:26:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:57.189 15:26:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:57.189 15:26:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:57.189 [2024-11-20 15:26:43.555879] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:18:57.189 [2024-11-20 15:26:43.556365] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.447 [2024-11-20 15:26:43.758825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.447 [2024-11-20 15:26:43.875192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.015 15:26:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:58.015 15:26:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:18:58.015 15:26:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:18:58.015 15:26:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:58.015 15:26:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:18:58.015 15:26:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:18:58.015 15:26:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:18:58.015 15:26:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:58.015 15:26:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:18:58.015 15:26:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:18:58.015 15:26:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:18:58.015 15:26:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:18:58.015 15:26:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:18:58.015 15:26:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:58.015 15:26:44 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:18:58.274 15:26:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:18:58.274 15:26:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:18:58.274 15:26:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:18:58.274 15:26:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:58.274 15:26:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:58.274 15:26:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:58.274 15:26:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:58.274 15:26:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:58.274 15:26:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:58.274 15:26:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:58.274 15:26:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:58.274 15:26:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:58.274 1+0 records in 00:18:58.274 1+0 records out 00:18:58.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448794 s, 9.1 MB/s 00:18:58.274 15:26:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:58.274 15:26:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:58.274 15:26:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:58.274 15:26:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:18:58.274 15:26:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:58.274 15:26:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:58.274 15:26:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:58.274 15:26:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:58.533 15:26:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:18:58.533 { 00:18:58.533 "nbd_device": "/dev/nbd0", 00:18:58.533 "bdev_name": "raid5f" 00:18:58.533 } 00:18:58.533 ]' 00:18:58.533 15:26:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:18:58.533 15:26:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:18:58.533 { 00:18:58.533 "nbd_device": "/dev/nbd0", 00:18:58.533 "bdev_name": "raid5f" 00:18:58.533 } 00:18:58.533 ]' 00:18:58.533 15:26:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:18:58.533 15:26:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:58.533 15:26:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:58.533 15:26:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:58.533 15:26:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:58.533 15:26:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:58.533 15:26:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:58.533 15:26:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:58.793 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:18:58.793 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:58.793 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:58.793 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:58.793 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:58.793 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:58.793 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:58.793 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:58.793 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:58.793 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:58.793 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:59.052 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:59.052 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:59.052 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:59.052 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:59.052 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:59.052 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:59.052 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:59.052 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:59.052 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:59.052 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:18:59.052 15:26:45 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:18:59.052 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:18:59.052 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:59.052 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:59.052 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:18:59.052 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:59.052 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:18:59.052 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:59.052 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:59.052 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:59.052 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:18:59.052 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:59.052 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:59.052 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:59.052 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:18:59.052 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:59.052 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:59.052 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:18:59.312 /dev/nbd0 00:18:59.312 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:59.312 15:26:45 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:59.312 15:26:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:59.312 15:26:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:59.312 15:26:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:59.312 15:26:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:59.312 15:26:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:59.312 15:26:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:59.312 15:26:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:59.312 15:26:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:59.312 15:26:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:59.312 1+0 records in 00:18:59.312 1+0 records out 00:18:59.312 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041248 s, 9.9 MB/s 00:18:59.312 15:26:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:59.312 15:26:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:59.312 15:26:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:59.312 15:26:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:59.312 15:26:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:59.312 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:59.312 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:59.312 15:26:45 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:59.312 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:59.312 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:59.571 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:59.571 { 00:18:59.571 "nbd_device": "/dev/nbd0", 00:18:59.571 "bdev_name": "raid5f" 00:18:59.571 } 00:18:59.571 ]' 00:18:59.571 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:59.571 15:26:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:59.571 { 00:18:59.571 "nbd_device": "/dev/nbd0", 00:18:59.571 "bdev_name": "raid5f" 00:18:59.571 } 00:18:59.571 ]' 00:18:59.571 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:18:59.571 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:59.571 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:18:59.571 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:18:59.571 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:18:59.571 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:18:59.571 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:18:59.571 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:18:59.571 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:59.571 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:59.571 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:59.571 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:59.571 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:59.571 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:18:59.571 256+0 records in 00:18:59.571 256+0 records out 00:18:59.571 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118461 s, 88.5 MB/s 00:18:59.830 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:59.830 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:59.830 256+0 records in 00:18:59.830 256+0 records out 00:18:59.830 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0357957 s, 29.3 MB/s 00:18:59.830 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:18:59.830 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:59.830 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:59.830 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:59.830 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:59.830 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:59.830 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:59.830 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:59.830 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:18:59.830 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:59.830 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:59.830 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:59.830 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:59.830 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:59.830 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:59.830 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:59.830 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:00.089 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:00.089 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:00.089 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:00.089 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:00.089 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:00.089 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:00.089 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:00.089 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:00.089 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:00.089 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:00.089 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:19:00.352 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:00.352 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:00.352 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:00.352 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:00.352 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:00.352 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:00.352 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:00.352 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:00.352 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:00.352 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:00.352 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:00.352 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:00.352 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:00.352 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:00.352 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:00.352 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:00.621 malloc_lvol_verify 00:19:00.621 15:26:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:00.621 77982f08-7866-430f-9f37-0121b1b1aba7 00:19:00.621 15:26:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:19:00.879 bcd162ad-d150-4a62-90f0-5a0b3dcbe27e
00:19:00.879  15:26:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:19:01.138 /dev/nbd0
00:19:01.138  15:26:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:19:01.138  15:26:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:19:01.138  15:26:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:19:01.138  15:26:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
00:19:01.138  15:26:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:19:01.138 mke2fs 1.47.0 (5-Feb-2023)
00:19:01.138 Discarding device blocks: 0/4096 done
00:19:01.138 Creating filesystem with 4096 1k blocks and 1024 inodes
00:19:01.138
00:19:01.138 Allocating group tables: 0/1 done
00:19:01.138 Writing inode tables: 0/1 done
00:19:01.138 Creating journal (1024 blocks): done
00:19:01.138 Writing superblocks and filesystem accounting information: 0/1 done
00:19:01.138
00:19:01.138  15:26:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:19:01.138  15:26:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:19:01.138  15:26:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:19:01.138  15:26:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:19:01.138  15:26:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:19:01.138  15:26:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:01.138  15:26:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- #
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:01.397 15:26:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:01.397 15:26:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:01.397 15:26:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:01.397 15:26:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:01.397 15:26:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:01.397 15:26:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:01.397 15:26:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:01.397 15:26:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:01.397 15:26:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 89950 00:19:01.397 15:26:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 89950 ']' 00:19:01.397 15:26:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 89950 00:19:01.397 15:26:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:19:01.397 15:26:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:01.397 15:26:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89950 00:19:01.397 15:26:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:01.397 15:26:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:01.397 killing process with pid 89950 00:19:01.397 15:26:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89950' 00:19:01.397 15:26:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 89950 00:19:01.397 15:26:47 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 89950 00:19:03.299 15:26:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:03.299 00:19:03.299 real 0m5.827s 00:19:03.299 user 0m7.749s 00:19:03.299 sys 0m1.511s 00:19:03.299 15:26:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:03.299 15:26:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:03.299 ************************************ 00:19:03.299 END TEST bdev_nbd 00:19:03.299 ************************************ 00:19:03.299 15:26:49 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:19:03.299 15:26:49 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:19:03.299 15:26:49 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:19:03.299 15:26:49 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:19:03.299 15:26:49 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:03.300 15:26:49 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:03.300 15:26:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:03.300 ************************************ 00:19:03.300 START TEST bdev_fio 00:19:03.300 ************************************ 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:03.300 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:03.300 ************************************ 00:19:03.300 START TEST bdev_fio_rw_verify 00:19:03.300 ************************************ 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:03.300 15:26:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break
00:19:03.300  15:26:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:19:03.300  15:26:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:19:03.300 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:19:03.300 fio-3.35
00:19:03.300 Starting 1 thread
00:19:15.498
00:19:15.498 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90151: Wed Nov 20 15:27:00 2024
00:19:15.498   read: IOPS=11.1k, BW=43.2MiB/s (45.3MB/s)(432MiB/10001msec)
00:19:15.498     slat (usec): min=19, max=194, avg=21.69, stdev= 2.29
00:19:15.498     clat (usec): min=10, max=427, avg=143.44, stdev=51.58
00:19:15.498      lat (usec): min=31, max=449, avg=165.12, stdev=51.81
00:19:15.498     clat percentiles (usec):
00:19:15.498      | 50.000th=[  149], 99.000th=[  239], 99.900th=[  281], 99.990th=[  351],
00:19:15.498      | 99.999th=[  408]
00:19:15.498   write: IOPS=11.6k, BW=45.4MiB/s (47.6MB/s)(449MiB/9880msec); 0 zone resets
00:19:15.498     slat (usec): min=8, max=259, avg=18.40, stdev= 3.79
00:19:15.498     clat (usec): min=62, max=1248, avg=330.04, stdev=43.43
00:19:15.498      lat (usec): min=79, max=1508, avg=348.43, stdev=44.31
00:19:15.498     clat percentiles (usec):
00:19:15.498      | 50.000th=[  334], 99.000th=[  420], 99.900th=[  553], 99.990th=[ 1037],
00:19:15.498      | 99.999th=[ 1254]
00:19:15.498    bw (  KiB/s): min=42616, max=49512, per=98.94%, avg=46023.58, stdev=2159.01, samples=19
00:19:15.498    iops        : min=10654, max=12378, avg=11505.89, stdev=539.75, samples=19
00:19:15.498   lat (usec)   : 20=0.01%, 50=0.01%, 100=12.02%, 250=38.12%, 500=49.78%
00:19:15.498   lat (usec)   : 750=0.05%, 1000=0.01%
00:19:15.498   lat (msec)   : 2=0.01%
00:19:15.498   cpu          : usr=98.80%, sys=0.51%, ctx=30, majf=0, minf=9225
00:19:15.498   IO depths    : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:19:15.498      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:15.499      complete  : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:15.499      issued rwts: total=110604,114893,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:15.499      latency   : target=0, window=0, percentile=100.00%, depth=8
00:19:15.499
00:19:15.499 Run status group 0 (all jobs):
00:19:15.499    READ: bw=43.2MiB/s (45.3MB/s), 43.2MiB/s-43.2MiB/s (45.3MB/s-45.3MB/s), io=432MiB (453MB), run=10001-10001msec
00:19:15.499   WRITE: bw=45.4MiB/s (47.6MB/s), 45.4MiB/s-45.4MiB/s (47.6MB/s-47.6MB/s), io=449MiB (471MB), run=9880-9880msec
00:19:16.066 -----------------------------------------------------
00:19:16.066 Suppressions used:
00:19:16.066   count      bytes template
00:19:16.066       1          7 /usr/src/fio/parse.c
00:19:16.066     834      80064 /usr/src/fio/iolog.c
00:19:16.066       1          8 libtcmalloc_minimal.so
00:19:16.066       1        904 libcrypto.so
00:19:16.066 -----------------------------------------------------
00:19:16.066
00:19:16.066
00:19:16.066 real	0m12.905s
00:19:16.066 user	0m13.077s
00:19:16.066 sys	0m0.686s
00:19:16.067  15:27:02 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:16.067  15:27:02 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:19:16.067 ************************************
00:19:16.067 END TEST bdev_fio_rw_verify
00:19:16.067 ************************************
00:19:16.067  15:27:02 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
00:19:16.067  15:27:02 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:19:16.067  15:27:02 blockdev_raid5f.bdev_fio --
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:16.067 15:27:02 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:16.067 15:27:02 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:19:16.067 15:27:02 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:19:16.067 15:27:02 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:16.067 15:27:02 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:16.067 15:27:02 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:16.067 15:27:02 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:19:16.067 15:27:02 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:16.067 15:27:02 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:16.067 15:27:02 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:16.067 15:27:02 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:19:16.067 15:27:02 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:19:16.067 15:27:02 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:19:16.067 15:27:02 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:16.067 15:27:02 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "771dc1a1-0be7-42ed-bdd5-c4ee6c220a1e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": 
"771dc1a1-0be7-42ed-bdd5-c4ee6c220a1e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "771dc1a1-0be7-42ed-bdd5-c4ee6c220a1e",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "60fda490-5b0f-4b6f-88c7-89588c580317",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "3b14b758-f37e-48f5-b0c5-100917c4d08e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "0dd655ee-5607-4254-8b30-fcea2d40ac80",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:16.067 15:27:02 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:16.067 15:27:02 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:16.067 15:27:02 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:16.067 /home/vagrant/spdk_repo/spdk 00:19:16.067 15:27:02 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:16.067 15:27:02 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:19:16.067 00:19:16.067 real 0m13.191s 00:19:16.067 user 0m13.202s 00:19:16.067 sys 0m0.819s 00:19:16.067 15:27:02 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:16.067 15:27:02 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:16.067 ************************************ 00:19:16.067 END TEST bdev_fio 00:19:16.067 ************************************ 00:19:16.439 15:27:02 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:16.439 15:27:02 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:16.439 15:27:02 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:16.439 15:27:02 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:16.439 15:27:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:16.439 ************************************ 00:19:16.439 START TEST bdev_verify 00:19:16.439 ************************************ 00:19:16.439 15:27:02 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:16.439 [2024-11-20 15:27:02.692530] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:19:16.440 [2024-11-20 15:27:02.692675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90315 ]
00:19:16.440 [2024-11-20 15:27:02.875369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:19:16.707 [2024-11-20 15:27:03.003539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:16.707 [2024-11-20 15:27:03.003568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:17.270 Running I/O for 5 seconds...
00:19:19.139     12192.00 IOPS,    47.62 MiB/s
[2024-11-20T15:27:06.998Z]     13158.50 IOPS,    51.40 MiB/s
[2024-11-20T15:27:07.932Z]     13970.33 IOPS,    54.57 MiB/s
[2024-11-20T15:27:08.868Z]     14317.75 IOPS,    55.93 MiB/s
[2024-11-20T15:27:08.868Z]     14259.80 IOPS,    55.70 MiB/s
00:19:22.386                                                                                 Latency(us)
00:19:22.386 [2024-11-20T15:27:08.868Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:22.386 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:22.386 Verification LBA range: start 0x0 length 0x2000
00:19:22.386 	 raid5f              :       5.02    7121.04      27.82       0.00     0.00   26973.88     406.31   24003.55
00:19:22.386 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:22.386 Verification LBA range: start 0x2000 length 0x2000
00:19:22.387 	 raid5f              :       5.02    7142.04      27.90       0.00     0.00   26914.49     183.42   24003.55
00:19:22.387 [2024-11-20T15:27:08.869Z] ===================================================================================================================
00:19:22.387 [2024-11-20T15:27:08.869Z] Total               :              14263.08      55.72       0.00     0.00   26944.14     183.42   24003.55
00:19:23.762
00:19:23.762 real	0m7.450s
00:19:23.762 user	0m13.742s
00:19:23.762 sys	0m0.283s
00:19:23.762  15:27:10 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:23.762  15:27:10
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:23.762 ************************************ 00:19:23.762 END TEST bdev_verify 00:19:23.762 ************************************ 00:19:23.762 15:27:10 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:23.762 15:27:10 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:23.762 15:27:10 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:23.762 15:27:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:23.762 ************************************ 00:19:23.762 START TEST bdev_verify_big_io 00:19:23.762 ************************************ 00:19:23.762 15:27:10 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:23.762 [2024-11-20 15:27:10.218919] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:19:23.762 [2024-11-20 15:27:10.219050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90416 ] 00:19:24.022 [2024-11-20 15:27:10.403576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:24.281 [2024-11-20 15:27:10.523439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.281 [2024-11-20 15:27:10.523471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.861 Running I/O for 5 seconds... 
00:19:26.734 693.00 IOPS, 43.31 MiB/s [2024-11-20T15:27:14.593Z] 855.50 IOPS, 53.47 MiB/s [2024-11-20T15:27:15.531Z] 909.00 IOPS, 56.81 MiB/s [2024-11-20T15:27:16.468Z] 935.75 IOPS, 58.48 MiB/s [2024-11-20T15:27:16.468Z] 965.00 IOPS, 60.31 MiB/s 00:19:29.986 Latency(us) 00:19:29.986 [2024-11-20T15:27:16.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.986 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:29.986 Verification LBA range: start 0x0 length 0x200 00:19:29.986 raid5f : 5.13 470.23 29.39 0.00 0.00 6727434.75 176.84 345314.18 00:19:29.986 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:29.986 Verification LBA range: start 0x200 length 0x200 00:19:29.986 raid5f : 5.22 486.02 30.38 0.00 0.00 6484883.57 165.32 326785.13 00:19:29.986 [2024-11-20T15:27:16.468Z] =================================================================================================================== 00:19:29.986 [2024-11-20T15:27:16.468Z] Total : 956.25 59.77 0.00 0.00 6603097.26 165.32 345314.18 00:19:31.401 00:19:31.401 real 0m7.627s 00:19:31.401 user 0m14.102s 00:19:31.401 sys 0m0.282s 00:19:31.401 15:27:17 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:31.401 15:27:17 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.401 ************************************ 00:19:31.401 END TEST bdev_verify_big_io 00:19:31.401 ************************************ 00:19:31.401 15:27:17 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:31.401 15:27:17 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:31.401 15:27:17 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:31.401 15:27:17 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:31.401 ************************************ 00:19:31.401 START TEST bdev_write_zeroes 00:19:31.401 ************************************ 00:19:31.401 15:27:17 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:31.660 [2024-11-20 15:27:17.930540] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:19:31.661 [2024-11-20 15:27:17.930752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90519 ] 00:19:31.661 [2024-11-20 15:27:18.114564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.919 [2024-11-20 15:27:18.240312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.488 Running I/O for 1 seconds... 
00:19:33.426 25647.00 IOPS, 100.18 MiB/s 00:19:33.426 Latency(us) 00:19:33.426 [2024-11-20T15:27:19.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.426 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:33.426 raid5f : 1.01 25611.85 100.05 0.00 0.00 4981.69 1500.22 7106.31 00:19:33.426 [2024-11-20T15:27:19.908Z] =================================================================================================================== 00:19:33.426 [2024-11-20T15:27:19.908Z] Total : 25611.85 100.05 0.00 0.00 4981.69 1500.22 7106.31 00:19:34.805 00:19:34.805 real 0m3.413s 00:19:34.805 user 0m2.993s 00:19:34.805 sys 0m0.288s 00:19:34.805 15:27:21 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:34.805 15:27:21 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:34.805 ************************************ 00:19:34.805 END TEST bdev_write_zeroes 00:19:34.805 ************************************ 00:19:35.064 15:27:21 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:35.064 15:27:21 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:35.064 15:27:21 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:35.064 15:27:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:35.064 ************************************ 00:19:35.064 START TEST bdev_json_nonenclosed 00:19:35.064 ************************************ 00:19:35.064 15:27:21 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:35.064 [2024-11-20 
15:27:21.411160] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:19:35.064 [2024-11-20 15:27:21.411309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90572 ] 00:19:35.324 [2024-11-20 15:27:21.590441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.324 [2024-11-20 15:27:21.712711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.324 [2024-11-20 15:27:21.712824] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:19:35.324 [2024-11-20 15:27:21.712856] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:35.324 [2024-11-20 15:27:21.712868] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:35.583 00:19:35.583 real 0m0.654s 00:19:35.583 user 0m0.405s 00:19:35.583 sys 0m0.145s 00:19:35.583 ************************************ 00:19:35.583 END TEST bdev_json_nonenclosed 00:19:35.583 ************************************ 00:19:35.583 15:27:21 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:35.583 15:27:21 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:35.583 15:27:22 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:35.583 15:27:22 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:35.583 15:27:22 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:35.583 15:27:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:35.583 
************************************ 00:19:35.583 START TEST bdev_json_nonarray 00:19:35.583 ************************************ 00:19:35.583 15:27:22 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:35.842 [2024-11-20 15:27:22.140589] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:19:35.842 [2024-11-20 15:27:22.140736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90599 ] 00:19:35.842 [2024-11-20 15:27:22.320605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.101 [2024-11-20 15:27:22.446270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.101 [2024-11-20 15:27:22.446385] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:19:36.101 [2024-11-20 15:27:22.446410] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:36.101 [2024-11-20 15:27:22.446431] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:36.360 00:19:36.360 real 0m0.661s 00:19:36.360 user 0m0.413s 00:19:36.360 sys 0m0.143s 00:19:36.360 15:27:22 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:36.361 15:27:22 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:36.361 ************************************ 00:19:36.361 END TEST bdev_json_nonarray 00:19:36.361 ************************************ 00:19:36.361 15:27:22 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:19:36.361 15:27:22 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:19:36.361 15:27:22 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:19:36.361 15:27:22 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:19:36.361 15:27:22 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:19:36.361 15:27:22 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:36.361 15:27:22 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:36.361 15:27:22 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:19:36.361 15:27:22 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:19:36.361 15:27:22 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:19:36.361 15:27:22 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:19:36.361 00:19:36.361 real 0m49.397s 00:19:36.361 user 1m6.535s 00:19:36.361 sys 0m5.284s 00:19:36.361 15:27:22 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:36.361 15:27:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:36.361 
************************************ 00:19:36.361 END TEST blockdev_raid5f 00:19:36.361 ************************************ 00:19:36.620 15:27:22 -- spdk/autotest.sh@194 -- # uname -s 00:19:36.620 15:27:22 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:19:36.620 15:27:22 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:36.620 15:27:22 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:36.620 15:27:22 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:19:36.620 15:27:22 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:36.620 15:27:22 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:36.620 15:27:22 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:36.620 15:27:22 -- common/autotest_common.sh@10 -- # set +x 00:19:36.620 15:27:22 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:36.620 15:27:22 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:19:36.620 15:27:22 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:19:36.620 15:27:22 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:36.620 15:27:22 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:36.620 15:27:22 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:19:36.620 15:27:22 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:19:36.620 15:27:22 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:36.620 15:27:22 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:36.620 15:27:22 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:19:36.620 15:27:22 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:19:36.620 15:27:22 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:19:36.620 15:27:22 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:19:36.620 15:27:22 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:19:36.620 15:27:22 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:19:36.620 15:27:22 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:19:36.620 15:27:22 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:19:36.620 15:27:22 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:19:36.620 15:27:22 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:19:36.620 15:27:22 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:19:36.620 15:27:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:36.620 15:27:22 -- common/autotest_common.sh@10 -- # set +x 00:19:36.620 15:27:22 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:19:36.620 15:27:22 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:19:36.620 15:27:22 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:19:36.620 15:27:22 -- common/autotest_common.sh@10 -- # set +x 00:19:39.178 INFO: APP EXITING 00:19:39.178 INFO: killing all VMs 00:19:39.178 INFO: killing vhost app 00:19:39.178 INFO: EXIT DONE 00:19:39.178 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:39.178 Waiting for block devices as requested 00:19:39.438 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:39.438 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:40.383 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:40.383 Cleaning 00:19:40.383 Removing: /var/run/dpdk/spdk0/config 00:19:40.383 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:19:40.383 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:19:40.383 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:19:40.383 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:19:40.383 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:19:40.383 Removing: /var/run/dpdk/spdk0/hugepage_info 00:19:40.383 Removing: /dev/shm/spdk_tgt_trace.pid56758 00:19:40.383 Removing: /var/run/dpdk/spdk0 00:19:40.383 Removing: /var/run/dpdk/spdk_pid56517 00:19:40.383 Removing: /var/run/dpdk/spdk_pid56758 00:19:40.383 Removing: /var/run/dpdk/spdk_pid56992 00:19:40.383 Removing: /var/run/dpdk/spdk_pid57102 00:19:40.383 Removing: /var/run/dpdk/spdk_pid57158 00:19:40.383 Removing: /var/run/dpdk/spdk_pid57286 00:19:40.383 Removing: /var/run/dpdk/spdk_pid57304 
00:19:40.383 Removing: /var/run/dpdk/spdk_pid57514 00:19:40.383 Removing: /var/run/dpdk/spdk_pid57631 00:19:40.383 Removing: /var/run/dpdk/spdk_pid57738 00:19:40.383 Removing: /var/run/dpdk/spdk_pid57860 00:19:40.383 Removing: /var/run/dpdk/spdk_pid57968 00:19:40.383 Removing: /var/run/dpdk/spdk_pid58013 00:19:40.642 Removing: /var/run/dpdk/spdk_pid58050 00:19:40.642 Removing: /var/run/dpdk/spdk_pid58120 00:19:40.642 Removing: /var/run/dpdk/spdk_pid58237 00:19:40.642 Removing: /var/run/dpdk/spdk_pid58690 00:19:40.642 Removing: /var/run/dpdk/spdk_pid58767 00:19:40.642 Removing: /var/run/dpdk/spdk_pid58841 00:19:40.642 Removing: /var/run/dpdk/spdk_pid58857 00:19:40.642 Removing: /var/run/dpdk/spdk_pid59019 00:19:40.642 Removing: /var/run/dpdk/spdk_pid59035 00:19:40.642 Removing: /var/run/dpdk/spdk_pid59189 00:19:40.642 Removing: /var/run/dpdk/spdk_pid59205 00:19:40.642 Removing: /var/run/dpdk/spdk_pid59280 00:19:40.642 Removing: /var/run/dpdk/spdk_pid59298 00:19:40.642 Removing: /var/run/dpdk/spdk_pid59363 00:19:40.643 Removing: /var/run/dpdk/spdk_pid59391 00:19:40.643 Removing: /var/run/dpdk/spdk_pid59594 00:19:40.643 Removing: /var/run/dpdk/spdk_pid59630 00:19:40.643 Removing: /var/run/dpdk/spdk_pid59719 00:19:40.643 Removing: /var/run/dpdk/spdk_pid61079 00:19:40.643 Removing: /var/run/dpdk/spdk_pid61291 00:19:40.643 Removing: /var/run/dpdk/spdk_pid61436 00:19:40.643 Removing: /var/run/dpdk/spdk_pid62084 00:19:40.643 Removing: /var/run/dpdk/spdk_pid62291 00:19:40.643 Removing: /var/run/dpdk/spdk_pid62431 00:19:40.643 Removing: /var/run/dpdk/spdk_pid63080 00:19:40.643 Removing: /var/run/dpdk/spdk_pid63405 00:19:40.643 Removing: /var/run/dpdk/spdk_pid63550 00:19:40.643 Removing: /var/run/dpdk/spdk_pid64935 00:19:40.643 Removing: /var/run/dpdk/spdk_pid65183 00:19:40.643 Removing: /var/run/dpdk/spdk_pid65336 00:19:40.643 Removing: /var/run/dpdk/spdk_pid66716 00:19:40.643 Removing: /var/run/dpdk/spdk_pid66973 00:19:40.643 Removing: /var/run/dpdk/spdk_pid67119 
00:19:40.643 Removing: /var/run/dpdk/spdk_pid68494 00:19:40.643 Removing: /var/run/dpdk/spdk_pid68940 00:19:40.643 Removing: /var/run/dpdk/spdk_pid69084 00:19:40.643 Removing: /var/run/dpdk/spdk_pid70560 00:19:40.643 Removing: /var/run/dpdk/spdk_pid70819 00:19:40.643 Removing: /var/run/dpdk/spdk_pid70969 00:19:40.643 Removing: /var/run/dpdk/spdk_pid72451 00:19:40.643 Removing: /var/run/dpdk/spdk_pid72716 00:19:40.643 Removing: /var/run/dpdk/spdk_pid72866 00:19:40.643 Removing: /var/run/dpdk/spdk_pid74346 00:19:40.643 Removing: /var/run/dpdk/spdk_pid74829 00:19:40.643 Removing: /var/run/dpdk/spdk_pid74969 00:19:40.643 Removing: /var/run/dpdk/spdk_pid75118 00:19:40.643 Removing: /var/run/dpdk/spdk_pid75531 00:19:40.643 Removing: /var/run/dpdk/spdk_pid76263 00:19:40.643 Removing: /var/run/dpdk/spdk_pid76639 00:19:40.643 Removing: /var/run/dpdk/spdk_pid77349 00:19:40.643 Removing: /var/run/dpdk/spdk_pid77801 00:19:40.643 Removing: /var/run/dpdk/spdk_pid78571 00:19:40.643 Removing: /var/run/dpdk/spdk_pid78984 00:19:40.643 Removing: /var/run/dpdk/spdk_pid80946 00:19:40.643 Removing: /var/run/dpdk/spdk_pid81390 00:19:40.643 Removing: /var/run/dpdk/spdk_pid81830 00:19:40.902 Removing: /var/run/dpdk/spdk_pid83922 00:19:40.902 Removing: /var/run/dpdk/spdk_pid84402 00:19:40.902 Removing: /var/run/dpdk/spdk_pid84924 00:19:40.902 Removing: /var/run/dpdk/spdk_pid85974 00:19:40.902 Removing: /var/run/dpdk/spdk_pid86302 00:19:40.902 Removing: /var/run/dpdk/spdk_pid87234 00:19:40.902 Removing: /var/run/dpdk/spdk_pid87561 00:19:40.902 Removing: /var/run/dpdk/spdk_pid88496 00:19:40.902 Removing: /var/run/dpdk/spdk_pid88819 00:19:40.902 Removing: /var/run/dpdk/spdk_pid89500 00:19:40.902 Removing: /var/run/dpdk/spdk_pid89776 00:19:40.902 Removing: /var/run/dpdk/spdk_pid89843 00:19:40.902 Removing: /var/run/dpdk/spdk_pid89885 00:19:40.902 Removing: /var/run/dpdk/spdk_pid90136 00:19:40.902 Removing: /var/run/dpdk/spdk_pid90315 00:19:40.902 Removing: /var/run/dpdk/spdk_pid90416 
00:19:40.902 Removing: /var/run/dpdk/spdk_pid90519 00:19:40.902 Removing: /var/run/dpdk/spdk_pid90572 00:19:40.902 Removing: /var/run/dpdk/spdk_pid90599 00:19:40.902 Clean 00:19:40.902 15:27:27 -- common/autotest_common.sh@1453 -- # return 0 00:19:40.902 15:27:27 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:19:40.902 15:27:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:40.902 15:27:27 -- common/autotest_common.sh@10 -- # set +x 00:19:40.902 15:27:27 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:19:40.902 15:27:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:40.903 15:27:27 -- common/autotest_common.sh@10 -- # set +x 00:19:41.162 15:27:27 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:41.162 15:27:27 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:41.162 15:27:27 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:19:41.162 15:27:27 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:19:41.162 15:27:27 -- spdk/autotest.sh@398 -- # hostname 00:19:41.162 15:27:27 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:19:41.162 geninfo: WARNING: invalid characters removed from testname! 
00:20:03.097 15:27:49 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:05.633 15:27:52 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:08.169 15:27:54 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:10.074 15:27:56 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:12.610 15:27:58 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:14.530 15:28:00 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:16.435 15:28:02 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:16.435 15:28:02 -- spdk/autorun.sh@1 -- $ timing_finish 00:20:16.436 15:28:02 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:20:16.436 15:28:02 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:16.436 15:28:02 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:20:16.436 15:28:02 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:16.436 + [[ -n 5208 ]] 00:20:16.436 + sudo kill 5208 00:20:16.445 [Pipeline] } 00:20:16.464 [Pipeline] // timeout 00:20:16.470 [Pipeline] } 00:20:16.487 [Pipeline] // stage 00:20:16.493 [Pipeline] } 00:20:16.509 [Pipeline] // catchError 00:20:16.519 [Pipeline] stage 00:20:16.521 [Pipeline] { (Stop VM) 00:20:16.536 [Pipeline] sh 00:20:16.827 + vagrant halt 00:20:19.392 ==> default: Halting domain... 00:20:25.970 [Pipeline] sh 00:20:26.251 + vagrant destroy -f 00:20:29.542 ==> default: Removing domain... 
00:20:29.555 [Pipeline] sh 00:20:29.839 + mv output /var/jenkins/workspace/raid-vg-autotest_2/output 00:20:29.848 [Pipeline] } 00:20:29.863 [Pipeline] // stage 00:20:29.868 [Pipeline] } 00:20:29.882 [Pipeline] // dir 00:20:29.887 [Pipeline] } 00:20:29.901 [Pipeline] // wrap 00:20:29.907 [Pipeline] } 00:20:29.920 [Pipeline] // catchError 00:20:29.929 [Pipeline] stage 00:20:29.932 [Pipeline] { (Epilogue) 00:20:29.945 [Pipeline] sh 00:20:30.228 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:35.514 [Pipeline] catchError 00:20:35.517 [Pipeline] { 00:20:35.531 [Pipeline] sh 00:20:35.814 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:35.814 Artifacts sizes are good 00:20:35.823 [Pipeline] } 00:20:35.837 [Pipeline] // catchError 00:20:35.851 [Pipeline] archiveArtifacts 00:20:35.859 Archiving artifacts 00:20:35.943 [Pipeline] cleanWs 00:20:35.954 [WS-CLEANUP] Deleting project workspace... 00:20:35.954 [WS-CLEANUP] Deferred wipeout is used... 00:20:35.961 [WS-CLEANUP] done 00:20:35.963 [Pipeline] } 00:20:35.981 [Pipeline] // stage 00:20:35.986 [Pipeline] } 00:20:36.002 [Pipeline] // node 00:20:36.008 [Pipeline] End of Pipeline 00:20:36.043 Finished: SUCCESS